Query for the size of the database
I need a query that displays the current database size and the size of the database a month back. I tried OEM, but if the database goes down in between, it will not display the result. (OS: HP-UX, Linux; versions 8i, 9i, 10g)
Check this query:
SELECT ADD_MONTHS(SYSDATE, -1), SUM(bytes) / 1073741824
FROM   v$datafile
WHERE  creation_time < ADD_MONTHS(SYSDATE, -1)
UNION ALL
SELECT SYSDATE, SUM(bytes) / 1073741824
FROM   v$datafile;
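Note that v$datafile counts only datafiles (1073741824 bytes = 1 GB). If temp files and online redo logs should also count as "database size", a variant along these lines can be used (a sketch; adjust to taste):

```sql
-- Current total size in GB, including temp files and online redo logs,
-- which v$datafile alone omits.
SELECT ( (SELECT SUM(bytes) FROM v$datafile)
       + (SELECT SUM(bytes) FROM v$tempfile)
       + (SELECT SUM(bytes) FROM v$log) ) / 1073741824 AS total_gb
FROM dual;
```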
- - - - - - - - - - - - - - - - - - - - -
Kamran Agayev A. (10g OCP)
http://kamranagayev.wordpress.com
Tags: Database
Similar Questions
-
Single SQL query to derive the bill-of-entry (customs declaration) date for items in a stock table
Dear all,
Please suggest a single SQL query for the below.
We have a stock table as shown below:
STOCK_TABLE
ITEM_CODE    BAT_NO      TXN_CODE            DOC_NO      BOE_DT
(item code)  (batch no.) (transaction code)  (doc. no.)  (bill-of-entry date)
I1           B1
I1
I2
I3           B70
I4           B80
I5           B90         T102                1234        JULY 2, 2015
I6           B100
We have to find the bill-of-entry date (i.e. the date when the items came into this particular table) for items that are not attached to any document (that is, whose TXN_CODE, DOC_NO and BOE_DT fields are NULL).
For each item in the stock table that is not attached to any document, the bill-of-entry date is derived as follows:
- If the (item code, batch no.) combination is present in HISTORY_TABLE, the bill-of-entry date will be its UPDT_DT, provided the transaction code (TXN_CODE) is an IN transaction (which can be determined from the TRANSACTIONS table).
- If the (item code, batch no.) combination is NOT present in HISTORY_TABLE, or the transaction code for that item/batch combination is an OUT transaction, the bill-of-entry date will be the document date (DOC_DT) obtained from whichever of the 3 IN_TABLE_HEAD tables contains that item for that particular batch.
- If both case 1 and case 2 fail, the bill-of-entry date will be the latest document date (DOC_DT) obtained from whichever of the 3 IN_TABLE_HEAD tables contains that particular item; the BAT_NO in the expected results will be the one corresponding to this document if available, otherwise NULL.
- If case 1 or case 2 succeeds, the last field in the expected output (shown further below), BATCH_YN, will be 'Y', because the batch matches. Otherwise it will be 'N'.
-
ORA-16783: cannot resolve gap for the database
I have two databases, emadb and emadbdg, managed by Data Guard. emadb is currently the primary; emadbdg is currently the physical standby.
ORA-16783 (cannot resolve gap for database) is, I think, the origin of the problem. Can anyone help solve the problem below? Logs attached.
Data Guard broker output:
DGMGRL> show configuration verbose
Configuration - DRSolution
  Protection Mode: MaxAvailability
  Databases:
    emadb   - Primary database
      Error: ORA-16825: multiple errors or warnings, including fast-start failover-related errors or warnings, detected for the database
    emadbdg - (*) Physical standby database
      Warning: ORA-16817: unsynchronized fast-start failover configuration
  (*) Fast-Start Failover target
  Properties:
    FastStartFailoverThreshold     = '30'
    OperationTimeout               = '30'
    FastStartFailoverLagLimit      = '30'
    CommunicationTimeout           = '180'
    FastStartFailoverAutoReinstate = 'TRUE'
    FastStartFailoverPmyShutdown   = 'FALSE'
    BystandersFollowRoleChange     = 'ALL'
  Fast-Start Failover: ENABLED
    Threshold:        30 seconds
    Target:           emadbdg
    Observer:         emarn1
    Lag Limit:        30 seconds (not in use)
    Shutdown Primary: FALSE
    Auto-reinstate:   TRUE
  Configuration Status:
    ERROR
DGMGRL> show database verbose emadb
Database - emadb
  Role:           PRIMARY
  Intended State: TRANSPORT-ON
  Instance(s):
    emadb
  Database Error(s):
    ORA-16783: cannot resolve gap for database emadbdg
  Database Warning(s):
    ORA-16817: unsynchronized fast-start failover configuration
  Properties:
    DGConnectIdentifier       = 'emadb'
    ObserverConnectIdentifier = ''
    LogXptMode                = 'SYNC'
    DelayMins                 = '0'
    Binding                   = 'optional'
    MaxFailure                = '0'
    MaxConnections            = '1'
    ReopenSecs                = '300'
    NetTimeout                = '30'
    RedoCompression           = 'DISABLE'
    LogShipping               = 'ON'
    PreferredApplyInstance    = ''
    ApplyInstanceTimeout      = '0'
    ApplyParallel             = 'AUTO'
    StandbyFileManagement     = 'AUTO'
    ArchiveLagTarget          = '0'
    LogArchiveMaxProcesses    = '4'
    LogArchiveMinSucceedDest  = '1'
    DbFileNameConvert         = 'emadbdg, emadb'
    LogFileNameConvert        = '/opt/app/oracle/oradata/emadbdg, /opt/app/oracle/oradata/emadb'
    FastStartFailoverTarget   = 'emadbdg'
    InconsistentProperties    = '(monitor)'
    InconsistentLogXptProps   = '(monitor)'
    SendQEntries              = '(monitor)'
    LogXptStatus              = '(monitor)'
    RecvQEntries              = '(monitor)'
    SidName                   = 'emadb'
    StaticConnectIdentifier   = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=emarn1)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=emadb_DGMGRL)(INSTANCE_NAME=emadb)(SERVER=DEDICATED)))'
    StandbyArchiveLocation    = '/opt/app/oracle/oradata/emadb/archivelog1'
    AlternateLocation         = ''
    LogArchiveTrace           = '0'
    LogArchiveFormat          = '%t_%s_%r.dbf'
    TopWaitEvents             = '(monitor)'
  Database Status:
    ERROR
DGMGRL> show database verbose emadbdg
Database - emadbdg
  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   (unknown)
  Apply Lag:       (unknown)
  Real Time Query: OFF
  Instance(s):
    emadbdg
  Database Warning(s):
    ORA-16817: unsynchronized fast-start failover configuration
  Properties:
    DGConnectIdentifier       = 'emadbdg'
    ObserverConnectIdentifier = ''
    LogXptMode                = 'SYNC'
    DelayMins                 = '0'
    Binding                   = 'OPTIONAL'
    MaxFailure                = '0'
    MaxConnections            = '1'
    ReopenSecs                = '300'
    NetTimeout                = '30'
    RedoCompression           = 'DISABLE'
    LogShipping               = 'ON'
    PreferredApplyInstance    = ''
    ApplyInstanceTimeout      = '0'
    ApplyParallel             = 'AUTO'
    StandbyFileManagement     = 'AUTO'
    ArchiveLagTarget          = '0'
    LogArchiveMaxProcesses    = '4'
    LogArchiveMinSucceedDest  = '1'
    DbFileNameConvert         = 'emadb, emadbdg'
    LogFileNameConvert        = '/opt/app/oracle/oradata/emadb, /opt/app/oracle/oradata/emadbdg'
    FastStartFailoverTarget   = 'emadb'
    InconsistentProperties    = '(monitor)'
    InconsistentLogXptProps   = '(monitor)'
    SendQEntries              = '(monitor)'
    LogXptStatus              = '(monitor)'
    RecvQEntries              = '(monitor)'
    SidName                   = 'emadbdg'
    StaticConnectIdentifier   = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=emarn2)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=emadbdg_DGMGRL)(INSTANCE_NAME=emadbdg)(SERVER=DEDICATED)))'
    StandbyArchiveLocation    = '/opt/app/oracle/oradata/emadbdg/archivelog1'
    AlternateLocation         = ''
    LogArchiveTrace           = '0'
    LogArchiveFormat          = '%t_%s_%r.dbf'
    TopWaitEvents             = '(monitor)'
  Database Status:
    WARNING
DGMGRL >
Alex Antony Samantha wrote:
Head node:
SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /opt/app/oracle/oradata/emadb/archivelog1
Oldest online log sequence     65
Next log sequence to archive   67
Current log sequence           67
SQL> select thread#, max(sequence#) from v$archived_log group by thread#;
   THREAD# MAX(SEQUENCE#)
---------- --------------
         1            925
The sequence# column is misleading: the real current sequence is 65, but it shows far beyond that. Have you restored an old backup under a different incarnation?
When you performed the incremental backups, was the CURRENT_SCN in sync between the primary and the standby? And was sequence number *29* transferred from the primary to the standby, or had it already been removed from the primary?
Perform the two methods.
First method:
(1) SQL> alter system set log_archive_dest_state_2='defer';
(2) Perform 3-4 log switches.
(3) SQL> alter system set log_archive_dest_state_2='enable';
Then check the alert logs of both databases and the archived log files.
Also, you mentioned the hostname in the listener entries; is this entry added in /etc/hosts? Otherwise you can use the IP address instead of the hostname and then reload the listener.
Second method:
(1) Copy the missing archives from the primary, starting at sequence 29.
(2) Register them manually and then perform a recovery,
or
(3) recover manually.
And update us with your findings after all these steps.
Thank you. -
An error occurred when querying for the pending operations
Original title: sysprep problem
I have an Acer Aspire 5738Z running Windows 7 (64-bit). I try to open sysprep.exe and it does not open; a message box appears saying "An error occurred while querying for pending operations". What can I do to fix this?
Hello,
Thanks for posting the request in the Microsoft community forums.
I understand that you receive the error "An error occurred while querying for pending operations" when trying to open sysprep.exe on the computer. You can try the solutions provided below and check if they help solve the issue.
Method 1:
You can run the System File Checker tool to fix corrupted system files:
How to use the System File Checker tool to fix missing or corrupted system files on Windows Vista or Windows 7
http://support.Microsoft.com/kb/929833
Method 2:
If the steps above fail, you can try the following steps and check:
a. Run regedit by typing regedit in Start Search and pressing ENTER.
b. Navigate to the following key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control
Key: RegistrySizeLimit
Type: REG_DWORD
Value: 0xffffffff (4294967295)
c. Reboot.
If you need further help with Windows, keep us posted. We will be happy to help you.
-
A query design for converting a time difference into days, hours, minutes and seconds
Hi all
I need a query that converts a time difference into the number of days, remaining hours, remaining minutes and remaining seconds. I have made this one so far. Please suggest any modifications; until now it seems to work very well, but kindly highlight any anomaly.
WITH data (startdate, enddate, datediff) AS (
SELECT to_date('2015-10-01 10:00:59','yyyy-dd-mm hh24:mi:ss'), to_date('2015-20-01 03:00:49','yyyy-dd-mm hh24:mi:ss'), to_date('2015-10-01 10:00','yyyy-dd-mm hh24:mi:ss') - to_date('2015-20-01 03:00','yyyy-dd-mm hh24:mi:ss') FROM dual
UNION ALL SELECT to_date('2015-10-01 10:00:39','yyyy-dd-mm hh24:mi:ss'), to_date('2015-20-01 03:00:40','yyyy-dd-mm hh24:mi:ss'), to_date('2015-10-01 10:00','yyyy-dd-mm hh24:mi:ss') - to_date('2015-20-01 03:00','yyyy-dd-mm hh24:mi:ss') FROM dual
UNION ALL SELECT to_date('2015-11-01 10:30:45','yyyy-dd-mm hh24:mi:ss'), to_date('2015-11-01 11:00:50','yyyy-dd-mm hh24:mi:ss'), to_date('2015-11-01 10:30','yyyy-dd-mm hh24:mi:ss') - to_date('2015-11-01 11:00','yyyy-dd-mm hh24:mi:ss') FROM dual
UNION ALL SELECT to_date('2015-11-01 09:00:50','yyyy-dd-mm hh24:mi:ss'), to_date('2015-11-01 10:00:59','yyyy-dd-mm hh24:mi:ss'), to_date('2015-11-01 09:00','yyyy-dd-mm hh24:mi:ss') - to_date('2015-11-01 10:00','yyyy-dd-mm hh24:mi:ss') FROM dual
UNION ALL SELECT to_date('2015-11-01 08:30:49','yyyy-dd-mm hh24:mi:ss'), to_date('2015-11-01 09:30:59','yyyy-dd-mm hh24:mi:ss'), to_date('2015-11-01 08:30','yyyy-dd-mm hh24:mi:ss') - to_date('2015-11-01 09:30','yyyy-dd-mm hh24:mi:ss') FROM dual
)
SELECT trunc(enddate - startdate) AS days,
       trunc(((enddate - startdate) - trunc(enddate - startdate)) * 24) AS hours,
       trunc((((enddate - startdate) - trunc(enddate - startdate)) * 24 - trunc(((enddate - startdate) - trunc(enddate - startdate)) * 24)) * 60) AS minutes,
       ((((enddate - startdate) - trunc(enddate - startdate)) * 24 - trunc(((enddate - startdate) - trunc(enddate - startdate)) * 24)) * 60 - trunc((((enddate - startdate) - trunc(enddate - startdate)) * 24 - trunc(((enddate - startdate) - trunc(enddate - startdate)) * 24)) * 60)) * 60 AS seconds
FROM data;
Thanks for the answers in advance.
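The same breakdown is much easier to read if the day difference is first converted to a day-to-second interval; a sketch (reusing the `data` sample rows above, with fractional seconds truncated):

```sql
-- EXTRACT pulls each component out of the interval directly,
-- avoiding the nested trunc() arithmetic.
SELECT EXTRACT(DAY    FROM diff)         AS days,
       EXTRACT(HOUR   FROM diff)         AS hours,
       EXTRACT(MINUTE FROM diff)         AS minutes,
       TRUNC(EXTRACT(SECOND FROM diff))  AS seconds
FROM  (SELECT NUMTODSINTERVAL(enddate - startdate, 'DAY') AS diff
       FROM data);
```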
AHA!
TO_TIMESTAMP expects a string as input, so it first makes an implicit conversion from DATE to a string, in the format of NLS_DATE_FORMAT.
To convert a DATE to a TIMESTAMP independently of NLS_DATE_FORMAT, use
CAST (your_date AS TIMESTAMP) -
Problem with a dynamic lookup-query in a resource request dataset
Hi all
I need to provision several Exchange mailboxes for an OIM user. I can provision the AD account and the mailbox account without any problem, but only for the first AD and Exchange account. For the second, third, etc., I get the error "Invalid login name" during Exchange account provisioning. I discovered that this problem exists with the Exchange Connector - it is not able to collect the correct GUID. So in my dataset XML I use a dynamic lookup query to manually select the correct Alias, login name and GUID. The query for the GUID is the following (I cloned the RO for AD and Exchange):
<AttributeReference available-in-bulk="true" length="32" widget="lookup-query" type="String" attr-ref="Object GUID" name="Object GUID">
<lookupQuery lookup-query="select distinct UD_KFUSER_OBJECTGUID GUID from UD_KFUSER, orc, sta where UD_KFUSER.orc_key = orc.orc_key and orc.usr_key = '$Form data.Take' and UD_KFUSER.UD_KFUSER_AD = 27 and orc.orc_status = sta.sta_status and sta.sta_bucket != 'Cancelled'" display-field="GUID" save-field="Object GUID"/>
</AttributeReference>
My questions are:
1. I have to type * to run the query in the user interface; without * I get this error:
<Feb 17, 2012 11:12:22 CET> <Error> <oracle.adfinternal.view.faces.config.rich.RegistrationConfigurator> <BEA-000000> <ADF_FACES-60096: Server Exception during PPR, #10
oracle.iam.platform.canonic.base.NoteException: an error occurred during executing the search query.
at oracle.iam.platform.canonic.agentry.GenericEntityLookupActor.perform(GenericEntityLookupActor.java:337)
Is this right?
2. When I get correct values (from the lookup query), they are missing on the application details and on the RO form - what am I missing?
I use OIM 11.1.1.5. In my dataset XML I use correct attr-ref values (labels); when I type the values manually, they are propagated to the RO form and the Exchange mailbox is created.
Best
I did not have any problem writing the lookup query.
It works very well for me.
The query will be filled for the field, so why select *?
I used it as
-
ashrpt - ORA-20200: ASH samples do NOT exist for the DATABASE/Instance
Hello guys,
Could you please help me with generating an ASH report? I am stuck on:
"ORA-20200: No ASH samples exist for the DATABASE/Instance"
I tried to find out how to enable sampling, but the only things I found are for Statspack or AWR.
Thanks a lot :)
redy007 wrote:
sb92075:
SYS@PMBTEST > select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
The Active Session History exists only in the Enterprise Edition.
-
How to write a query for the data exchange between two columns?
How to write a query for the data exchange between two columns?
I tried a query; it does NOT work.
Thank you.
update tmp t1 set t1.m1=t1.m2 and t1.m2=(select t2.m1 from tmp t2 where t2.student_id = t1.student_id)
Published by: user533361 on October 23, 2009 14:04
Just plain and simple:
update tmp t1 set t1.m1 = t1.m2, t1.m2 = t1.m1
/
SY.
-
How to determine the patchset level of the Oracle EBS database server?
Dear all,
How do I find out the patchset level of the Oracle EBS database server, i.e. 10.2.0.3?
Regards
Dear Suzy,
I was looking for the database patchset applied. After sourcing the database owner's environment file, I used the following command from the $ORACLE_RDBMS_HOME/OPatch directory: $ opatch lsinventory. It listed all the applied patches, including the patchset.
'opatch lsinventory' should list all patches applied to $RDBMS_ORACLE_HOME. For the database patchset level, you can use one of the statements above to get the version. In addition, you can check from the OS by running the executables (i.e. sqlplus, impdp... etc).
And for the query, I used $AD_TOP/sql/adutconf.sql; it is perfectly correct.
Ok.
But I can't go through OAM - Applications Diagnostics; can you guide me? When I select Site Map - Diagnostics & Repair - Run Diagnostic Tests, it errors, saying "An error has occurred!
Please note: you don't have sufficient privileges to perform this function." OAM and Applications Diagnostics are two different things.
Diagnostics of applications can be run from "Oracle Diagnostic Tool".
Note: 358831.1 - E-Business Suite Diagnostics Run Instructions
https://metalink2.Oracle.com/MetaLink/PLSQL/ml2_documents.showDocument?p_database_id=not&P_ID=358831.1
Note: 167000.1 - Installation Guide for E-Business Suite Diagnostics
https://metalink2.Oracle.com/MetaLink/PLSQL/ml2_documents.showDocument?p_database_id=not&P_ID=167000.1
For the patchset level in OAM, follow the steps described in the following note:
Note: 550654.1 - How to Get the Patch Level of Oracle Applications Products in R12
https://metalink2.Oracle.com/MetaLink/PLSQL/ml2_documents.showDocument?p_database_id=not&P_ID=550654.1
Kind regards,
Hussein -
Which method is good for upgrading a small database?
I want to upgrade a database from 10g to 11g.
The 10g production database runs on server1; I want to upgrade it to 11g on another server, server2.
Which is better:
1. upgrade, then migrate to the different server, or
2. migrate, then upgrade the database?
Another question:
If I install 11g on server2 and migrate the 10g database directly, will there be problems on future production?
Instead of 11.2.0.1, I would suggest that you use 11.2.0.4. For a 5 GB database, export/import is probably the best option.
HTH
Srini -
Should I wait until the query finishes executing to see its execution plan?
Hello Experts,
I want to see the execution plan of the query below. However, it takes more than 3 hours to run. Do I have to wait all that time to see the execution plan?
Note: EXPLAIN PLAN FOR does not work for me. (I mean that I do not see the actual row counts, etc. with EXPLAIN PLAN.)
You can see the execution plan output I got when I cancelled the execution after 1 minute.
My first question: what should I do to see the execution plan of queries that run for a long time?
Second question: when I cancel the query during execution in order to see the execution plan, will I see the real execution plan or erroneous values? The first execution plan seems inaccurate; what do you think?
Third question: why does EXPLAIN PLAN FOR not work? Also, should I use EXPLAIN PLAN FOR in this scenario? Can I see runtime statistics for long-running queries without executing them to completion?
Thanks for your help.
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
Select /*+ GATHER_PLAN_STATISTICS NO_PARALLEL */ J.INVOICEACCOUNT, J.INVOICEID, J.INVOICEDATE, (T.LINEAMOUNT + T.LINEAMOUNTTAX) price
from custinvoicejour j join custinvoicetrans t on
substr(nls_lower(j.dataareaid), 1, 7) = substr(nls_lower(t.dataareaid), 1, 7) and
substr(nls_lower(J.INVOICEID), 1, 25) = substr(nls_lower(t.INVOICEID), 1, 25)
where
substr(nls_lower(T.DATAAREAID), 1, 7) = '201' and T.AVBROCHURELINENUM = 29457
and substr(nls_lower(j.dataareaid), 1, 7) = '201' and
J.INVOICEACCOUNT in
(select CE.ACCOUNTNUM from drmpos.avtr_seg_cust_campend CE where CE.CAMPAIGN = '201406' and CE.SEGMENT_LEVEL in ('A', 'E'))
and J.AVAWARDSALES > 190
and substr(nls_lower(J.AVBILLINGCAMPAIGN), 1, 13) = '201406'
and J.INVOICEDATE between '04.06.2014' and '13.06.2014';
SQL_ID dznya6x7st0t8, child number 0
-------------------------------------
Select /*+ GATHER_PLAN_STATISTICS NO_PARALLEL */ J.INVOICEACCOUNT,
J.INVOICEID, J.INVOICEDATE, (T.LINEAMOUNT + T.LINEAMOUNTTAX) price from
custinvoicejour j join custinvoicetrans t on
substr(nls_lower(j.dataareaid), 1, 7) =
substr(nls_lower(t.dataareaid), 1, 7) and
substr(nls_lower(J.INVOICEID), 1, 25) =
substr(nls_lower(t.INVOICEID), 1, 25) where
substr(nls_lower(T.DATAAREAID), 1, 7) = '201' and T.AVBROCHURELINENUM =
29457 and substr(nls_lower(j.dataareaid), 1, 7) = '201' and
J.INVOICEACCOUNT in (select CE.ACCOUNTNUM from
drmpos.avtr_seg_cust_campend CE where CE.CAMPAIGN = '201406' and
CE.SEGMENT_LEVEL in ('A', 'E')) and J.AVAWARDSALES > 190 and
substr(nls_lower(J.AVBILLINGCAMPAIGN), 1, 13) = '201406' and
J.INVOICEDATE between '04.06.2014' and '13.06.2014'
Plan hash value: 2002317666
---------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation                      | Name                           | Starts | E-Rows | A-Rows | A-Time      | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT               |                                |      1 |        |      0 | 00:00:00.01 |       0 |     0 |      |      |          |
|* 1 |  HASH JOIN                     |                                |      1 |   3956 |      0 | 00:00:00.01 |       0 |     0 | 2254K| 1061K| 2190K (0)|
|* 2 |   HASH JOIN                    |                                |      1 |     87 |  16676 | 00:00:01.64 |    227K |  3552 | 3109K| 1106K| 4111K (0)|
|* 3 |    TABLE ACCESS BY INDEX ROWID | CUSTINVOICEJOUR                |      1 |   1155 |  31889 | 00:00:01.16 |    223K |    15 |      |      |          |
|* 4 |     INDEX RANGE SCAN           | I_062INVOICEDATEORDERTYPEIDX   |      1 |   4943 |   134K | 00:00:00.83 |   45440 |     0 |      |      |          |
|  5 |    PARTITION LIST SINGLE       |                                |      1 |  82360 |   173K | 00:00:00.08 |    3809 |  3537 |      |      |          |
|* 6 |     TABLE ACCESS FULL          | AVTR_SEG_CUST_CAMPEND          |      1 |  82360 |   173K | 00:00:00.06 |    3809 |  3537 |      |      |          |
|  7 |   TABLE ACCESS BY INDEX ROWID  | CUSTINVOICETRANS               |      1 |   4560 |      0 | 00:00:00.01 |       0 |     0 |      |      |          |
|* 8 |    INDEX RANGE SCAN            | I_064INVLINENUMCAMPAIGNOFPRICE |      1 |   4560 |      0 | 00:00:00.01 |       0 |     0 |      |      |          |
---------------------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("J"."SYS_NC00299$"="T"."SYS_NC00165$" AND SUBSTR(NLS_LOWER("J"."INVOICEID"),1,25)=SUBSTR(NLS_LOWER("T"."INVOICEID"),1,25))
2 - access("J"."INVOICEACCOUNT"=SYS_OP_C2C("CE"."ACCOUNTNUM"))
3 - filter("J"."AVAWARDSALES">190)
4 - access("J"."SYS_NC00299$"=U'201' AND "J"."INVOICEDATE">=TO_DATE(' 2014-06-04 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
           "J"."SYS_NC00307$"=U'201406' AND "J"."INVOICEDATE"<=TO_DATE(' 2014-06-13 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    filter(("J"."INVOICEDATE">=TO_DATE(' 2014-06-04 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "J"."SYS_NC00307$"=U'201406' AND
           "J"."INVOICEDATE"<=TO_DATE(' 2014-06-13 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
6 - filter(("CE"."SEGMENT_LEVEL"='A' OR "CE"."SEGMENT_LEVEL"='E'))
8 - access("T"."SYS_NC00165$"=U'201' AND "T"."AVBROCHURELINENUM"=29457)
    filter("T"."AVBROCHURELINENUM"=29457)
EXPLAIN PLAN FOR
Select /*+ GATHER_PLAN_STATISTICS NO_PARALLEL */ J.INVOICEACCOUNT, J.INVOICEID, J.INVOICEDATE, (T.LINEAMOUNT + T.LINEAMOUNTTAX) price
from custinvoicejour j join custinvoicetrans t on
substr(nls_lower(j.dataareaid), 1, 7) = substr(nls_lower(t.dataareaid), 1, 7) and
substr(nls_lower(J.INVOICEID), 1, 25) = substr(nls_lower(t.INVOICEID), 1, 25)
where
substr(nls_lower(T.DATAAREAID), 1, 7) = '201' and T.AVBROCHURELINENUM = 29457
and substr(nls_lower(j.dataareaid), 1, 7) = '201' and
J.INVOICEACCOUNT in
(select CE.ACCOUNTNUM from drmpos.avtr_seg_cust_campend CE where CE.CAMPAIGN = '201406' and CE.SEGMENT_LEVEL in ('A', 'E'))
and J.AVAWARDSALES > 190
and substr(nls_lower(J.AVBILLINGCAMPAIGN), 1, 13) = '201406'
and J.INVOICEDATE between '04.06.2014' and '13.06.2014';
SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR);
SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('7h1nbzqjgwsp7', 2));
SQL_ID 7h1nbzqjgwsp7, child number 2
EXPLAIN PLAN for select /*+ GATHER_PLAN_STATISTICS NO_PARALLEL */
J.INVOICEACCOUNT, J.INVOICEID, J.INVOICEDATE,
(T.LINEAMOUNT + T.LINEAMOUNTTAX) price from custinvoicejour j join
custinvoicetrans t on substr(nls_lower(j.dataareaid), 1, 7) =
substr(nls_lower(t.dataareaid), 1, 7) and
substr(nls_lower(J.INVOICEID), 1, 25) =
substr(nls_lower(t.INVOICEID), 1, 25) where
substr(nls_lower(T.DATAAREAID), 1, 7) = '201' and T.AVBROCHURELINENUM =
29457 and substr(nls_lower(j.dataareaid), 1, 7) = '201' and
J.INVOICEACCOUNT in (select CE.ACCOUNTNUM from
drmpos.avtr_seg_cust_campend CE where CE.CAMPAIGN = '201406' and
CE.SEGMENT_LEVEL in ('A', 'E')) and J.AVAWARDSALES > 190 and
substr(nls_lower(J.AVBILLINGCAMPAIGN), 1, 13) = '201406' and
J.INVOICEDATE between '04.06.2014' and '13.06.2014'
NOTE: cannot fetch plan for SQL_ID: 7h1nbzqjgwsp7, CHILD_NUMBER: 2
Please verify the value of SQL_ID and CHILD_NUMBER;
It could also be that the plan is no longer in the cursor cache (check v$sql_plan)
NightWing wrote:
Randolf,
I don't understand. Do you mean by the above statement that A-Rows and E-Rows will be incorrect, but the ratio between them remains the same? Therefore, you can deduce the problem spots by comparing the differences.
Thus, A-Rows always give a wrong result for cancelled queries, don't they?
Charlie,
I think Martin gave a good explanation. Here's another example that hopefully makes things more obvious:
17:56:55 SQL> -- things go very wrong here with a small buffer cache
17:56:55 SQL> -- T2 rows are badly scattered when accessed via T1.FK
17:56:55 SQL> --
17:56:55 SQL> -- the "small work" approach would have been a good idea
17:56:55 SQL> -- if the estimate of 100 loop iterations was correct!
17:56:55 SQL> select
17:56:55   2         count(t2.attr2)
17:56:55   3  from
17:56:55   4         t1
17:56:55   5       , t2
17:56:55   6  where
17:56:55   7  /*------------------*/
17:56:55   8         trunc(t1.attr1) = 1
17:56:55   9  and    trunc(t1.attr2) = 1
17:56:55  10  /*------------------*/
17:56:55  11  and    t1.fk = t2.id
17:56:55  12  ;
t1
*
ERROR at line 4:
ORA-01013: user requested cancel of current operation

Elapsed: 00:04:58.30
18:01:53 SQL>
18:01:53 SQL> @xplan_extended_display_cursor '' '' 'ALLSTATS LAST+COST'
18:01:53 SQL> set echo off verify off termout off
SQL_ID 353msax56jvvp, child number 0
-------------------------------------
SELECT count(t2.attr2) from t1, t2 where
/*------------------*/ trunc(t1.attr1) = 1 and
trunc(t1.attr2) = 1 /*------------------*/ and t1.fk = t2.id

Plan hash value: 2900488714
------------------------------------------------------------------------------------------------------------------------------------
| Id | Pid | Ord | Operation                      | Name   | Starts | E-Rows | Cost (%CPU) | A-Rows | A-Time      | Buffers | Reads |
------------------------------------------------------------------------------------------------------------------------------------
|  0 |     |   7 | SELECT STATEMENT               |        |      1 |        |  4999 (100) |      0 | 00:00:00.01 |       0 |     0 |
|  1 |   0 |   6 |  SORT AGGREGATE                |        |      1 |      1 |             |      0 | 00:00:00.01 |       0 |     0 |
|  2 |   1 |   5 |   NESTED LOOPS                 |        |      1 |        |             |  57516 | 00:04:58.26 |    173K | 30770 |
|  3 |   2 |   3 |    NESTED LOOPS                |        |      1 |    100 |    4999 (1) |  57516 | 00:00:21.06 |    116K |  3632 |
|* 4 |   3 |   1 |     TABLE ACCESS FULL          | T1     |      1 |    100 |    4799 (1) |  57516 | 00:00:00.19 |    1008 |  1087 |
|* 5 |   3 |   2 |     INDEX UNIQUE SCAN          | T2_IDX |  57516 |      1 |       1 (0) |  57516 | 00:00:20.82 |    115K |  2545 |
|  6 |   2 |   4 |    TABLE ACCESS BY INDEX ROWID | T2     |  57516 |      1 |       2 (0) |  57516 | 00:04:37.14 |   57516 | 27138 |
------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
4 - filter((TRUNC("T1"."ATTR1")=1 AND TRUNC("T1"."ATTR2")=1))
5 - access("T1"."FK"="T2"."ID")
What I'm saying here is that I cancelled a query after about 5 minutes, and looking at the rowsource statistics I can already say the following:
1. The cardinality estimate for T1 is far off - the optimizer estimated 100 rows, but it had already produced more than 57,000 rows when the query was cancelled. So this definitely looks like a candidate cause of the problems.
2. The query spent most of its time on random-access lookups into table T2.
So while it is true that I don't know the final A-Rows information of this cancelled query, I can still tell a lot from it and begin to address the problems identified so far.
Randolf
-
How to free the disk space of the database file
Hi BDB experts,
I use BDB 4.6.21 with transactions and the Btree access method, and found that after vacuuming the db with db->truncate, the size of the db file has not changed. How can the db file's disk space be freed? Can it be configured?
Thank you
Min
Hi Min,
What Mike is saying is that you should not expect the space emptied by a truncating operation (for example, a delete or update that empties a page) in the database (empty pages) to be returned to the file system, and hence you should not expect to see the physical file size decrease.
The following section of the documentation explains this: Disk space requirements
Space freed by deleting key/data pairs from a Btree or Hash database is never returned to the filesystem, although it is reused where possible. This means that Btree and Hash databases can only grow. If enough keys are deleted from a database that shrinking the underlying file is desirable, use the DB->compact() method to reclaim disk space. Alternatively, you can create a new database and copy the records from the old one into it.
So, as Mike suggested, you can dump and reload the data into a new database, or copy the data from the existing/old database to a new one (then remove the old database and rename the new one to the old name), or you can try to compact the database using the compact() method.
If you use compact(), then in order to force the return of empty pages to the file system when possible, use the DB_FREE_SPACE flag and try to avoid using an explicit transaction (use a NULL txnid pointer, so that BDB will internally use multiple transactions which are committed periodically, to avoid locking large sections of the tree).
When using compaction in order to free up space and return empty database pages to the file system, it is generally recommended to call compact() repeatedly with a low 'compact_fillpercent'. In addition, the output statistics fields compact_pages_truncated and compact_pages_free in the DB_COMPACT structure should be examined to determine whether there is any point in continuing to run the compaction with the same compact_fillpercent: if those values are strictly positive, it is worth calling compact() again with the same compact_fillpercent (and the DB_FREE_SPACE flag). The compact algorithm makes a single pass over the pages of the database, so non-empty pages at the end of the file will prevent free pages (which are placed on the free list) from being returned to the file system.
Kind regards
Andrei
-
SQL query for mapping a set of degree batches to a group of classrooms
Hi all
I am using Oracle Database 11g Release 2.
I have the following data set:
Classrooms
ClassId  ClassName                  Capacity  Group
-------  -------------------------  --------  -----
1        Babbage/Software Engg Lab  24        1
2        Basement - PG Block        63        1
3        Class room 1               56        1
4        Class room 10              24        1
5        Class room 11              24        1
6        Class room 12              35        1
7        Class room 13              42        1
8        Class room 14              42        1
9        Class room 15              42        1
10       Class room 2               35        1
11       Class room 3               35        1
12       Class room 4               35        1
13       Class room 5               35        1
14       Class room 6               25        1
15       Class room 7               25        1
16       Class room 8               24        1
17       Class room 9               24        1
18       Control Sys Lab            24        1
19       Dig & Embd Sys Lab         20        1
20       DSP & Comm Lab             20        1
21       Electromechanical Sys Lab  28        1
22       Farabi/Web Tech Lab        36        1
23       Gen Purpose Lab            40        1
24       Shirazi/DB Tech Lab        36        1
25       Adv Elect Lab              30        2
26       Class room 16              42        2
27       Class room 17              49        2
28       Class room 18              56        2
29       Class room 19              42        2
30       Class room 20              49        2
31       Class room 21              35        3
32       Class room 22              35        3
33       MDA Lab                    20        3
DegreeBatches
BatchId  BatchName  Strength
-------  ---------  --------
1        BIT-11     79
2        BIT-12     28
3        BS(CS)-1   35
4        BS(CS)-2   78
5        BE(SE)-1   69
6        BE(SE)-2   84
7        BE(SE)-3   64
8        BYTČA-7    84
9        BYTČA-8    43
10       BEE-1      112
11       BEE-2      151
12       BEE-3      157
13       BEE-4      157
I want to map a combination of degree batches to a group of classrooms in such a way that they make full use of the maximum capacity of the classrooms within a group (ideally), or come as close to that as possible. Can this be done with a SQL query?
Any response will be appreciated.
The SQL scripts to create the required tables and populate the data are below:
CREATE TABLE classrooms (classid NUMBER, classname VARCHAR2(50), capacity NUMBER, grp NUMBER); -- GROUP is a reserved word in Oracle, so the column is named grp
INSERT INTO classrooms VALUES (1, 'Babbage/Software Engg Lab', 24, 1);
INSERT INTO classrooms VALUES (2, 'Basement - PG Block', 63, 1);
INSERT INTO classrooms VALUES (3, 'Class Room 1', 56, 1);
INSERT INTO classrooms VALUES (4, 'Class Room 10', 24, 1);
INSERT INTO classrooms VALUES (5, 'Class Room 11', 24, 1);
INSERT INTO classrooms VALUES (6, 'Class Room 12', 35, 1);
INSERT INTO classrooms VALUES (7, 'Class Room 13', 42, 1);
INSERT INTO classrooms VALUES (8, 'Class Room 14', 42, 1);
INSERT INTO classrooms VALUES (9, 'Class Room 15', 42, 1);
INSERT INTO classrooms VALUES (10, 'Class Room 2', 35, 1);
INSERT INTO classrooms VALUES (11, 'Class Room 3', 35, 1);
INSERT INTO classrooms VALUES (12, 'Class Room 4', 35, 1);
INSERT INTO classrooms VALUES (13, 'Class Room 5', 35, 1);
INSERT INTO classrooms VALUES (14, 'Class Room 6', 25, 1);
INSERT INTO classrooms VALUES (15, 'Class Room 7', 25, 1);
INSERT INTO classrooms VALUES (16, 'Class Room 8', 24, 1);
INSERT INTO classrooms VALUES (17, 'Class Room 9', 24, 1);
INSERT INTO classrooms VALUES (18, 'Control Sys Lab', 24, 1);
INSERT INTO classrooms VALUES (19, 'Dig & Embd Sys Lab', 20, 1);
INSERT INTO classrooms VALUES (20, 'DSP & Comm Lab', 20, 1);
INSERT INTO classrooms VALUES (21, 'Electromechanical System Lab', 28, 1);
INSERT INTO classrooms VALUES (22, 'Farabi/Web Tech Lab', 36, 1);
INSERT INTO classrooms VALUES (23, 'Gen Purpose Lab', 40, 1);
INSERT INTO classrooms VALUES (24, 'Shirazi/DB Tech Lab', 36, 1);
INSERT INTO classrooms VALUES (25, 'Adv Elect Lab', 30, 2);
INSERT INTO classrooms VALUES (26, 'Class Room 16', 42, 2);
INSERT INTO classrooms VALUES (27, 'Class Room 17', 49, 2);
INSERT INTO classrooms VALUES (28, 'Class Room 18', 56, 2);
INSERT INTO classrooms VALUES (29, 'Class Room 19', 42, 2);
INSERT INTO classrooms VALUES (30, 'Class Room 20', 49, 2);
INSERT INTO classrooms VALUES (31, 'Class Room 21', 35, 3);
INSERT INTO classrooms VALUES (32, 'Class Room 22', 35, 3);
INSERT INTO classrooms VALUES (33, 'MDA Lab', 20, 3);
CREATE TABLE degreebatches (batchid NUMBER, batchname VARCHAR2(50), strength NUMBER);
INSERT INTO degreebatches VALUES (1, 'BIT-11', 79);
INSERT INTO degreebatches VALUES (2, 'BIT-12', 28);
INSERT INTO degreebatches VALUES (3, 'BS(CS)-1', 35);
INSERT INTO degreebatches VALUES (4, 'BS(CS)-2', 78);
INSERT INTO degreebatches VALUES (5, 'BE(SE)-1', 69);
INSERT INTO degreebatches VALUES (6, 'BE(SE)-2', 84);
INSERT INTO degreebatches VALUES (7, 'BE(SE)-3', 64);
INSERT INTO degreebatches VALUES (8, 'BICSE-7', 84);
INSERT INTO degreebatches VALUES (9, 'BICSE-8', 43);
INSERT INTO degreebatches VALUES (10, 'BEE-1', 112);
INSERT INTO degreebatches VALUES (11, 'BEE-2', 151);
INSERT INTO degreebatches VALUES (12, 'BEE-3', 157);
INSERT INTO degreebatches VALUES (13, 'BEE-4', 157);
Best regards
Bilal
Published by: Bilal on December 27, 2012 09:52
Published by: Bilal on December 27, 2012 10:07
Bilal, thanks for the nice problem! Another possibility, for double-checking, is to write a small PL/SQL function that returns 1 if a duplicate id is found and 0 otherwise: FUNCTION Duplicate_Token_Found (p_str_main IN VARCHAR2, p_str_trial IN VARCHAR2) RETURN NUMBER. It would parse the second string and could use p_str_main LIKE '%,' || l_id || ',%' for each id. In any case, the complete query (without that) is given below:
Solution with names:
WITH rsf_itm (con_id, max_weight, nxt_id, lev, tot_weight, tot_profit, path, root_id, lev_1_id) AS (
  SELECT c.id, c.max_weight, i.id, 0, i.item_weight, i.item_profit,
         ',' || i.id || ',', i.id, 0
    FROM items i
   CROSS JOIN containers c
   UNION ALL
  SELECT r.con_id, r.max_weight, i.id, r.lev + 1,
         r.tot_weight + i.item_weight, r.tot_profit + i.item_profit,
         r.path || i.id || ',', r.root_id,
         CASE WHEN r.lev = 0 THEN i.id ELSE r.nxt_id END
    FROM rsf_itm r
    JOIN items i
      ON i.id > r.nxt_id
     AND r.tot_weight + i.item_weight <= r.max_weight
   ORDER BY 1, 2
) SEARCH DEPTH FIRST BY nxt_id SET line_no
, rsf_con (nxt_con_id, nxt_line_no, con_path, itm_path, tot_weight, tot_profit, lev) AS (
  SELECT con_id, line_no, To_Char(con_id),
         ':' || con_id || '-' || (lev + 1) || ':' || path,
         tot_weight, tot_profit, 0
    FROM rsf_itm
   UNION ALL
  SELECT r_i.con_id, r_i.line_no,
         r_c.con_path || ',' || r_i.con_id,
         r_c.itm_path || ':' || r_i.con_id || '-' || (r_i.lev + 1) || ':' || r_i.path,
         r_c.tot_weight + r_i.tot_weight,
         r_c.tot_profit + r_i.tot_profit,
         r_c.lev + 1
    FROM rsf_con r_c
    JOIN rsf_itm r_i
      ON r_i.con_id > r_c.nxt_con_id
   WHERE r_c.itm_path NOT LIKE '%,' || r_i.root_id || ',%'
     AND r_c.itm_path NOT LIKE '%,' || r_i.lev_1_id || ',%'
     AND r_c.itm_path NOT LIKE '%,' || r_i.nxt_id || ',%'
)
, paths_ranked AS (
  SELECT itm_path || ':' itm_path, tot_weight, tot_profit, lev + 1 n_cons,
         Rank () OVER (ORDER BY tot_profit DESC) rnk,
         Row_Number () OVER (ORDER BY tot_profit DESC) sol_id
    FROM rsf_con
), best_paths AS (
  SELECT itm_path, tot_weight, tot_profit, n_cons, sol_id
    FROM paths_ranked
   WHERE rnk = 1
), row_gen AS (
  SELECT LEVEL lev
    FROM DUAL
 CONNECT BY LEVEL <= (SELECT Count(*) FROM items)
), con_v AS (
  SELECT b.itm_path, r.lev con_ind, b.sol_id, b.tot_weight, b.tot_profit,
         Substr (b.itm_path, Instr (b.itm_path, ':', 1, 2*r.lev - 1) + 1,
                 Instr (b.itm_path, ':', 1, 2*r.lev) - Instr (b.itm_path, ':', 1, 2*r.lev - 1) - 1) con_nit_id,
         Substr (b.itm_path, Instr (b.itm_path, ':', 1, 2*r.lev) + 1,
                 Instr (b.itm_path, ':', 1, 2*r.lev + 1) - Instr (b.itm_path, ':', 1, 2*r.lev) - 1) itm_str
    FROM best_paths b
    JOIN row_gen r
      ON r.lev <= b.n_cons
), con_split AS (
  SELECT itm_path, con_ind, sol_id, tot_weight, tot_profit,
         Substr (con_nit_id, 1, Instr (con_nit_id, '-', 1) - 1) con_id,
         Substr (con_nit_id, Instr (con_nit_id, '-', 1) + 1) n_items,
         itm_str
    FROM con_v
), itm_v AS (
  SELECT c.itm_path, c.con_ind, c.sol_id, c.con_id, c.tot_weight, c.tot_profit,
         Substr (c.itm_str, Instr (c.itm_str, ',', 1, r.lev) + 1,
                 Instr (c.itm_str, ',', 1, r.lev + 1) - Instr (c.itm_str, ',', 1, r.lev) - 1) itm_id
    FROM con_split c
    JOIN row_gen r
      ON r.lev <= c.n_items
)
SELECT v.sol_id,
       v.tot_weight s_wt, v.tot_profit s_pr, c.id c_id, c.name c_name, c.max_weight m_wt,
       Sum (i.item_weight) OVER (PARTITION BY v.sol_id, c.id) c_wt,
       i.id i_id, i.name i_name, i.item_weight i_wt, i.item_profit i_pr
  FROM itm_v v
  JOIN containers c
    ON c.id = To_Number (v.con_id)
  JOIN items i
    ON i.id = To_Number (v.itm_id)
 ORDER BY sol_id, con_id, itm_id
/
SOL_ID S_WT S_PR C_ID C_NAME M_WT C_WT I_ID I_NAME I_WT I_PR
---------- ---- ---- ----- --------------- ---- ---- ----- ---------- ---- ----
1 255 255 1 SEECS UG Block 100 100 1 BIT-10 35 35 2 BIT-11 40 40 6 BICSE-7 25 25 2 IAEC Building 70 70 4 BSCS-3 40 40 7 BESE-3 30 30 3 RIMMS Building 90 85 3 BSCS-2 35 35 5 BEE-4 50 50
2 255 255 1 SEECS UG Block 100 95 4 BSCS-3 40 40 6 BICSE-7 25 25 7 BESE-3 30 30 2 IAEC Building 70 70 1 BIT-10 35 35 3 BSCS-2 35 35 3 RIMMS Building 90 90 2 BIT-11 40 40 5 BEE-4 50 50
3 255 255 1 SEECS UG Block 100 100 3 BSCS-2 35 35 4 BSCS-3 40 40 6 BICSE-7 25 25 2 IAEC Building 70 65 1 BIT-10 35 35 7 BESE-3 30 30 3 RIMMS Building 90 90 2 BIT-11 40 40 5 BEE-4 50 50
4 255 255 1 SEECS UG Block 100 100 3 BSCS-2 35 35 4 BSCS-3 40 40 6 BICSE-7 25 25 2 IAEC Building 70 70 2 BIT-11 40 40 7 BESE-3 30 30 3 RIMMS Building 90 85 1 BIT-10 35 35 5 BEE-4 50 50
5 255 255 1 SEECS UG Block 100 95 2 BIT-11 40 40 6 BICSE-7 25 25 7 BESE-3 30 30 2 IAEC Building 70 70 1 BIT-10 35 35 3 BSCS-2 35 35 3 RIMMS Building 90 90 4 BSCS-3 40 40 5 BEE-4 50 50
6 255 255 1 SEECS UG Block 100 100 2 BIT-11 40 40 3 BSCS-2 35 35 6 BICSE-7 25 25 2 IAEC Building 70 65 1 BIT-10 35 35 7 BESE-3 30 30 3 RIMMS Building 90 90 4 BSCS-3 40 40 5 BEE-4 50 50
7 255 255 1 SEECS UG Block 100 100 2 BIT-11 40 40 3 BSCS-2 35 35 6 BICSE-7 25 25 2 IAEC Building 70 70 4 BSCS-3 40 40 7 BESE-3 30 30 3 RIMMS Building 90 85 1 BIT-10 35 35 5 BEE-4 50 50
8 255 255 1 SEECS UG Block 100 100 1 BIT-10 35 35 4 BSCS-3 40 40 6 BICSE-7 25 25 2 IAEC Building 70 70 2 BIT-11 40 40 7 BESE-3 30 30 3 RIMMS Building 90 85 3 BSCS-2 35 35 5 BEE-4 50 50
9 255 255 1 SEECS UG Block 100 100 1 BIT-10 35 35 4 BSCS-3 40 40 6 BICSE-7 25 25 2 IAEC Building 70 65 3 BSCS-2 35 35 7 BESE-3 30 30 3 RIMMS Building 90 90 2 BIT-11 40 40 5 BEE-4 50 50
10 255 255 1 SEECS UG Block 100 100 1 BIT-10 35 35 3 BSCS-2 35 35 7 BESE-3 30 30 2 IAEC Building 70 65 2 BIT-11 40 40 6 BICSE-7 25 25 3 RIMMS Building 90 90 4 BSCS-3 40 40 5 BEE-4 50 50
11 255 255 1 SEECS UG Block 100 100 1 BIT-10 35 35 3 BSCS-2 35 35 7 BESE-3 30 30 2 IAEC Building 70 65 4 BSCS-3 40 40 6 BICSE-7 25 25 3 RIMMS Building 90 90 2 BIT-11 40 40 5 BEE-4 50 50
12 255 255 1 SEECS UG Block 100 95 1 BIT-10 35 35 3 BSCS-2 35 35 6 BICSE-7 25 25 2 IAEC Building 70 70 2 BIT-11 40 40 7 BESE-3 30 30 3 RIMMS Building 90 90 4 BSCS-3 40 40 5 BEE-4 50 50
13 255 255 1 SEECS UG Block 100 95 1 BIT-10 35 35 3 BSCS-2 35 35 6 BICSE-7 25 25 2 IAEC Building 70 70 4 BSCS-3 40 40 7 BESE-3 30 30 3 RIMMS Building 90 90 2 BIT-11 40 40 5 BEE-4 50 50
14 255 255 1 SEECS UG Block 100 100 1 BIT-10 35 35 2 BIT-11 40 40 6 BICSE-7 25 25 2 IAEC Building 70 65 3 BSCS-2 35 35 7 BESE-3 30 30 3 RIMMS Building 90 90 4 BSCS-3 40 40 5 BEE-4 50 50
98 rows selected. Elapsed: 00:00:01.42
Published by: BrendanP on January 20, 2013 11:25
I found the deduplicating regular expression was needed: AND RegExp_Instr (r_c.itm_path || r_i.path, ',(\d+),.*?,\1,') = 0
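The recursive-CTE subset enumeration at the heart of the solution above can be tried out at small scale. The sketch below is a hypothetical miniature, run with SQLite from Python rather than Oracle (SQLite also supports recursive WITH); the items table, the capacity of 70, and all names are made up for the demo and are not the schema from the question:

```python
import sqlite3

# Enumerate all subsets of items whose total weight fits one container,
# growing each subset only with larger ids so every subset appears once.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE items (id INTEGER, weight INTEGER, profit INTEGER);
INSERT INTO items VALUES (1, 35, 35), (2, 40, 40), (3, 25, 25), (4, 30, 30);
""")
best = con.execute("""
WITH RECURSIVE packs (last_id, tot_weight, tot_profit, path) AS (
  SELECT id, weight, profit, ',' || id || ',' FROM items
  UNION ALL
  SELECT i.id, p.tot_weight + i.weight, p.tot_profit + i.profit,
         p.path || i.id || ','
    FROM packs p
    JOIN items i ON i.id > p.last_id          -- ascending ids: no duplicate subsets
   WHERE p.tot_weight + i.weight <= 70        -- assumed capacity of one container
)
SELECT path, tot_weight, tot_profit
  FROM packs
 ORDER BY tot_profit DESC
 LIMIT 1
""").fetchone()
print(best)
```

The `i.id > p.last_id` join condition plays the same role as `i.id > r.nxt_id` in the Oracle query: it is what keeps the search tree free of permutations of the same item set.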
-
"Backup Optimization" does not work for the "database backup"
Hello
I use a windows environment and my info from database is like this:
Now I have changed my configuration to turn backup optimization on, then taken the EXAMPLE tablespace offline and put its datafile in offline mode. After that, I ran the 'backup database' command twice, but both backups have the same size and both include the EXAMPLE tablespace... 'backup archivelog' works, but 'backup database' does not!
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production PL/SQL Release 11.2.0.1.0 - Production CORE 11.2.0.1.0 Production TNS for 64-bit Windows: Version 11.2.0.1.0 - Production NLSRTL Version 11.2.0.1.0 - Production
According to this link, RMAN should not back up the offline datafile the second time: http://download.oracle.com/docs/cd/E11882_01/backup.112/e10642/rcmconfb.htm#BRADV113
is there something I missed?
configuration of RMAN:
CONFIGURE RETENTION POLICY TO REDUNDANCY 5; CONFIGURE BACKUP OPTIMIZATION ON; CONFIGURE DEFAULT DEVICE TYPE TO DISK; CONFIGURE CONTROLFILE AUTOBACKUP ON; CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; CONFIGURE MAXSETSIZE TO UNLIMITED; # default CONFIGURE ENCRYPTION FOR DATABASE OFF; CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default CONFIGURE COMPRESSION ALGORITHM 'HIGH' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE; CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\APP\ABC\PRODUCT\11.2.0\DBHOME_1\DATABASE\SNCFORCL.ORA'; # default
This is not a failure; it works as expected. There is a relationship between the retention policy and the number of backups:
RMAN only skips backups of offline or read-only data files once there are n + 1 backups of them, where n is the redundancy. You have REDUNDANCY 5, so RMAN will keep 6 identical backups of the offline data file.
Werner
-
Name of the table to query for the job time window
I am trying to build a query for a list of jobs in Tidal. Does anyone know which table holds the time window? Please advise. Thank you.
Hi Warren, I believe it is the jobdtl table:
jobdtl_fromtm and jobdtl_untiltm
HISTORY_TABLE
ITEM_CODE | BAT_NO | TXN_CODE | DOC_NO | UPDT_DT     |
I1        | B1     | T1       | 1234   | 03-JAN-2015 |
I1        | B20    | T20      | 4567   | 03-MAR-2015 |
I1        | B30    | T30      | 7890   | 05-FEB-2015 |
I2        | B40    | T20      | 1234   | 01-JAN-2015 |
TRANSACTION
TXN_CODE | TXN_TYPE |
T1       | IN       |
T20      | OUT      |
T30      | ALL      |
T50      | IN       |
T80      | IN       |
T90      | IN       |
T60      | ALL      |
T70      | ALL      |
T40      | ALL      |
IN_TABLE_HEAD_1
H1_SYS_ID (primary key) | TXN_CODE | DOC_NO | DOC_DATE    |
H1ID1                   | T1       | 1234   | 01-JAN-2015 |
H1ID2                   | T70      | 1234   | 01-FEB-2015 |
IN_TABLE_ITEM_1
I1_SYS_ID | H1_SYS_ID (foreign key referencing H1_SYS_ID in IN_TABLE_HEAD_1) | ITEM_CODE |
I1ID1     | H1ID1                                                            | I1        |
I1ID2     | H1ID1                                                            | I100      |
I1ID3     | H1ID2                                                            | I3        |
IN_TABLE_BATCH_1
B1_SYS_ID | TXN_CODE | DOC_NO (as in IN_TABLE_HEAD_1) | BAT_NO            |
B1ID1     | T1       | 1234                           | B1 (can be empty) |
B1ID2     | T70      | 1234                           | B70               |
IN_TABLE_HEAD_2
H2_SYS_ID (primary key) | TXN_CODE | DOC_NO | DOC_DATE    |
H2ID1                   | T30      | 4567   | 03-FEB-2015 |
H2ID2                   | T60      | 1234   | 03-JAN-2015 |
IN_TABLE_ITEM_2
I2_SYS_ID | H2_SYS_ID (foreign key referencing H2_SYS_ID in IN_TABLE_HEAD_2) | ITEM_CODE |
I2ID1     | H2ID1                                                            | I1        |
I2ID2     | H2ID1                                                            | I200      |
I2ID3     | H2ID2                                                            | I2        |
IN_TABLE_BATCH_2
B2_SYS_ID | I2_SYS_ID (foreign key referencing I2_SYS_ID in IN_TABLE_ITEM_2) | BAT_NO            |
B2ID1     | I2ID1                                                            | B30 (can be null) |
B2ID2     | I2ID2                                                            | B90               |
B2ID3     | I2ID3                                                            | B60               |
IN_TABLE_HEAD_3
H3_SYS_ID (primary key) | TXN_CODE | DOC_NO | DOC_DATE    |
H3ID1                   | T50      | 1234   | 02-JAN-2015 |
H3ID2                   | T80      | 1234   | 03-JAN-2015 |
H3ID3                   | T90      | 1234   | 04-JAN-2015 |
H3ID4                   | T40      | 1234   | 05-AUG-2015 |
IN_TABLE_ITEM_3
I3_SYS_ID | H3_SYS_ID (foreign key referencing H3_SYS_ID in IN_TABLE_HEAD_3) | ITEM_CODE | BAT_NO |
I3ID1     | H3ID1                                                            | I2        | B50    |
I3ID2     | H3ID2                                                            | I4        | B40    |
I3ID3     | H3ID3                                                            | I4        |        |
I3ID4     | H3ID4                                                            | I6        |        |
There is no IN_TABLE_BATCH_3
Please find below the expected results.
OUTPUT
ITEM_CODE | BAT_NO | TXN_CODE | DOC_NO | BOE_DT      | BATCH_YN |
I1        | B1     | T1       | 1234   | 03-JAN-2015 | Y        |
I1        | B30    | T30      | 7890   | 05-FEB-2015 | N        |
I2        | B60    | T60      | 1234   | 03-JAN-2015 | N        |
I3        | B70    | T70      | 1234   | 01-FEB-2015 | Y        |
I4        |        | T90      | 1234   | 04-JAN-2015 | N        |
I6        |        | T40      | 1234   | 05-AUG-2015 | N        |
Database commands to create the tables above and insert the records:
CREATE TABLE stock_table (item_code VARCHAR2(80), bat_no VARCHAR2(80), txn_code VARCHAR2(80),
doc_no VARCHAR2(80), boe_dt DATE);
INSERT INTO stock_table VALUES ('I1', 'B1', '', '', '');
INSERT INTO stock_table VALUES ('I1', '', '', '', '');
INSERT INTO stock_table VALUES ('I2', '', '', '', '');
INSERT INTO stock_table VALUES ('I3', 'B70', '', '', '');
INSERT INTO stock_table VALUES ('I4', 'B80', '', '', '');
INSERT INTO stock_table VALUES ('I5', 'B90', 'T102', '1234', '02-JUL-2015');
INSERT INTO stock_table VALUES ('I6', 'B100', '', '', '');
SELECT *
FROM stock_table;
CREATE TABLE history_table (item_code VARCHAR2(80), bat_no VARCHAR2(80), txn_code VARCHAR2(80),
doc_no VARCHAR2(80), updt_dt DATE);
INSERT INTO history_table VALUES ('I1', 'B1', 'T1', '1234', '03-JAN-2015');
INSERT INTO history_table VALUES ('I1', 'B20', 'T20', '4567', '03-MAR-2015');
INSERT INTO history_table VALUES ('I1', 'B30', 'T30', '7890', '05-FEB-2015');
INSERT INTO history_table VALUES ('I2', 'B40', 'T20', '1234', '01-JAN-2015');
SELECT *
FROM history_table;
CREATE TABLE transaction1 (txn_code VARCHAR2(80), txn_type VARCHAR2(80));
INSERT INTO transaction1 VALUES ('T1', 'IN');
INSERT INTO transaction1 VALUES ('T20', 'OUT');
INSERT INTO transaction1 VALUES ('T30', 'ALL');
INSERT INTO transaction1 VALUES ('T40', 'ALL');
INSERT INTO transaction1 VALUES ('T50', 'IN');
INSERT INTO transaction1 VALUES ('T60', 'ALL');
INSERT INTO transaction1 VALUES ('T70', 'ALL');
INSERT INTO transaction1 VALUES ('T80', 'IN');
INSERT INTO transaction1 VALUES ('T90', 'IN');
SELECT *
FROM transaction1;
CREATE TABLE in_table_head_1 (h1_sys_id VARCHAR2(80) PRIMARY KEY, txn_code VARCHAR2(80),
doc_no VARCHAR2(80), doc_dt DATE);
CREATE TABLE in_table_head_2 (h2_sys_id VARCHAR2(80) PRIMARY KEY, txn_code VARCHAR2(80),
doc_no VARCHAR2(80), doc_dt DATE);
CREATE TABLE in_table_head_3 (h3_sys_id VARCHAR2(80) PRIMARY KEY, txn_code VARCHAR2(80),
doc_no VARCHAR2(80), doc_dt DATE);
INSERT INTO in_table_head_1 VALUES ('H1ID1', 'T1', '1234', '01-JAN-2015');
INSERT INTO in_table_head_1 VALUES ('H1ID2', 'T70', '1234', '01-FEB-2015');
INSERT INTO in_table_head_2 VALUES ('H2ID1', 'T30', '4567', '03-FEB-2015');
INSERT INTO in_table_head_2 VALUES ('H2ID2', 'T60', '1234', '03-JAN-2015');
INSERT INTO in_table_head_3 VALUES ('H3ID1', 'T50', '1234', '02-JAN-2015');
INSERT INTO in_table_head_3 VALUES ('H3ID2', 'T80', '1234', '03-JAN-2015');
INSERT INTO in_table_head_3 VALUES ('H3ID3', 'T90', '1234', '05-JAN-2015');
INSERT INTO in_table_head_3 VALUES ('H3ID4', 'T40', '1234', '05-AUG-2015');
CREATE TABLE in_table_item_1 (i1_sys_id VARCHAR2(80) PRIMARY KEY,
h1_sys_id VARCHAR2(80) REFERENCES in_table_head_1 (h1_sys_id), item_code VARCHAR2(80));
CREATE TABLE in_table_item_2 (i2_sys_id VARCHAR2(80) PRIMARY KEY,
h2_sys_id VARCHAR2(80) REFERENCES in_table_head_2 (h2_sys_id), item_code VARCHAR2(80));
CREATE TABLE in_table_item_3 (i3_sys_id VARCHAR2(80) PRIMARY KEY,
h3_sys_id VARCHAR2(80) REFERENCES in_table_head_3 (h3_sys_id), item_code VARCHAR2(80),
bat_no VARCHAR2(80));
INSERT INTO in_table_item_1 VALUES ('I1ID1', 'H1ID1', 'I1');
INSERT INTO in_table_item_1 VALUES ('I1ID2', 'H1ID1', 'I100');
INSERT INTO in_table_item_1 VALUES ('I1ID3', 'H1ID2', 'I3');
INSERT INTO in_table_item_2 VALUES ('I2ID1', 'H2ID1', 'I1');
INSERT INTO in_table_item_2 VALUES ('I2ID2', 'H2ID1', 'I200');
INSERT INTO in_table_item_2 VALUES ('I2ID3', 'H2ID2', 'I2');
INSERT INTO in_table_item_3 VALUES ('I3ID1', 'H3ID1', 'I2', 'B50');
INSERT INTO in_table_item_3 VALUES ('I3ID2', 'H3ID2', 'I4', 'B40');
INSERT INTO in_table_item_3 VALUES ('I3ID3', 'H3ID3', 'I4', '');
INSERT INTO in_table_item_3 VALUES ('I3ID4', 'H3ID4', 'I6', '');
SELECT *
FROM in_table_item_1;
SELECT *
FROM in_table_item_2;
SELECT *
FROM in_table_item_3;
CREATE TABLE in_table_batch_1 (b1_sys_id VARCHAR2(80) PRIMARY KEY,
txn_code VARCHAR2(80), doc_no VARCHAR2(80), bat_no VARCHAR2(80));
CREATE TABLE in_table_batch_2 (b2_sys_id VARCHAR2(80) PRIMARY KEY,
i2_sys_id VARCHAR2(80) REFERENCES in_table_item_2 (i2_sys_id), bat_no VARCHAR2(80));
INSERT INTO in_table_batch_1 VALUES ('B1ID1', 'T1', '1234', 'B1');
INSERT INTO in_table_batch_1 VALUES ('B1ID2', 'T70', '1234', 'B70');
INSERT INTO in_table_batch_2 VALUES ('B2ID1', 'I2ID1', 'B30');
INSERT INTO in_table_batch_2 VALUES ('B2ID2', 'I2ID2', 'B90');
INSERT INTO in_table_batch_2 VALUES ('B2ID3', 'I2ID3', 'B60');
Please advise a solution for the same.
Thank you and best regards,
Séverine Suresh
Very brute force (subquery factoring used to allow easy testing/verification - could work for these test data only):
with
case_1 as
(select s.item_code,
        s.bat_no,
        h.txn_code,
        h.doc_no,
        h.updt_dt boe_dt,
        case when s.bat_no = h.bat_no then 'Y' else 'N' end batch_yn,
        case when h.txn_code is not null
              and h.doc_no is not null
              and h.updt_dt is not null
             then 'case 1'
        end refers_to
   from (select item_code, bat_no, txn_code, doc_no, boe_dt
           from w_stock_table
          where bat_no is null
             or txn_code is null
             or doc_no is null
             or boe_dt is null
        ) s
   left outer join
        w_history_table h
     on s.item_code = h.item_code
    and s.bat_no = h.bat_no
    and exists (select null
                  from w_transaction1
                 where txn_code = nvl(s.txn_code, h.txn_code)
                   and txn_type in ('IN', 'ALL')
               )
),
case_2 as
(select s.item_code,
        nvl(s.bat_no, h.bat_no) bat_no,
        nvl(s.txn_code, h.txn_code) txn_code,
        nvl(s.doc_no, h.doc_no) doc_no,
        nvl(s.boe_dt, h.updt_dt) updt_dt,
        case when s.bat_no = h.bat_no then 'Y' else 'N' end batch_yn,
        case when h.txn_code is not null
              and h.doc_no is not null
              and h.updt_dt is not null
             then 'case 2'
        end refers_to
   from (select item_code, bat_no, txn_code, doc_no, boe_dt
           from case_1
          where refers_to is null
        ) s
   left outer join
        w_history_table h
     on s.item_code = h.item_code
    and exists (select null
                  from w_transaction1
                 where txn_code = nvl(s.txn_code, h.txn_code)
                   and txn_type in ('IN', 'ALL')
               )
    and not exists (select null
                      from case_1
                     where item_code = h.item_code
                       and bat_no = h.bat_no
                       and txn_code = h.txn_code
                       and doc_no = h.doc_no
                       and updt_dt = h.updt_dt
                   )
),
case_31 as
(select s1.item_code,
        nvl(s1.bat_no, w1.bat_no) bat_no,
        nvl(s1.txn_code, w1.txn_code) txn_code,
        nvl(s1.doc_no, w1.doc_no) doc_no,
        nvl(s1.updt_dt, w1.doc_dt) updt_dt,
        case when s1.bat_no = w1.bat_no then 'Y' else 'N' end batch_yn,
        case when w1.txn_code is not null
              and w1.doc_no is not null
              and w1.doc_dt is not null
             then 'case 31'
        end refers_to
   from (select item_code, bat_no, txn_code, doc_no, updt_dt, batch_yn, refers_to
           from case_2
          where refers_to is null
        ) s1
   left outer join
        (select i1.item_code, h1.txn_code, h1.doc_no, h1.doc_dt, b1.bat_no
           from w_in_table_item_1 i1
          inner join
                w_in_table_head_1 h1
             on i1.h1_sys_id = h1.h1_sys_id
          inner join
                w_in_table_batch_1 b1
             on h1.txn_code = b1.txn_code
            and h1.doc_no = b1.doc_no
        ) w1
     on s1.item_code = w1.item_code
),
case_32 as
(select s2.item_code,
        nvl(s2.bat_no, w2.bat_no) bat_no,
        nvl(s2.txn_code, w2.txn_code) txn_code,
        nvl(s2.doc_no, w2.doc_no) doc_no,
        nvl(s2.updt_dt, w2.doc_dt) updt_dt,
        case when s2.bat_no = w2.bat_no then 'Y' else 'N' end batch_yn,
        case when w2.txn_code is not null
              and w2.doc_no is not null
              and w2.doc_dt is not null
             then 'case 32'
        end refers_to
   from (select item_code, bat_no, txn_code, doc_no, updt_dt, batch_yn, refers_to
           from case_2
          where refers_to is null
        ) s2
   left outer join
        (select i2.item_code, h2.txn_code, h2.doc_no, h2.doc_dt, b2.bat_no
           from w_in_table_item_2 i2
          inner join
                w_in_table_head_2 h2
             on i2.h2_sys_id = h2.h2_sys_id
          inner join
                w_in_table_batch_2 b2
             on i2.i2_sys_id = b2.i2_sys_id
        ) w2
     on s2.item_code = w2.item_code
),
case_33 as
(select s3.item_code,
        w3.bat_no,
        nvl(s3.txn_code, w3.txn_code) txn_code,
        nvl(s3.doc_no, w3.doc_no) doc_no,
        nvl(s3.updt_dt, w3.doc_dt) updt_dt,
        case when s3.bat_no = w3.bat_no then 'Y' else 'N' end batch_yn,
        case when w3.txn_code is not null
              and w3.doc_no is not null
              and w3.doc_dt is not null
             then 'case 33'
        end refers_to
   from (select item_code, bat_no, txn_code, doc_no, updt_dt, batch_yn, refers_to
           from case_2
          where refers_to is null
        ) s3
   left outer join
        (select i3.item_code, h3.txn_code, h3.doc_no, h3.doc_dt, i3.bat_no
           from w_in_table_item_3 i3
          inner join
                w_in_table_head_3 h3
             on i3.h3_sys_id = h3.h3_sys_id
        ) w3
     on s3.item_code = w3.item_code
)
select item_code, bat_no, txn_code, doc_no, boe_dt, batch_yn
  from case_1
 where refers_to is not null
union all
select item_code, bat_no, txn_code, doc_no, updt_dt, batch_yn
  from case_2
 where refers_to is not null
union all
select item_code, bat_no, txn_code, doc_no, updt_dt, batch_yn
  from (select item_code, bat_no, txn_code, doc_no, updt_dt, batch_yn,
               row_number() over (partition by item_code order by updt_dt desc) rn
          from (select item_code, bat_no, txn_code, doc_no, updt_dt, batch_yn
                  from case_31
                 where refers_to is not null
                union all
                select item_code, bat_no, txn_code, doc_no, updt_dt, batch_yn
                  from case_32
                 where refers_to is not null
                union all
                select item_code, bat_no, txn_code, doc_no, updt_dt, batch_yn
                  from case_33
                 where refers_to is not null
               )
       )
 where rn = 1
ITEM_CODE | BAT_NO | TXN_CODE | DOC_NO | BOE_DT | BATCH_YN |
---|---|---|---|---|---|
I1 | B1 | T1 | 1234 | JANUARY 3, 2015 | Y |
I1 | B30 | T30 | 7890 | FEBRUARY 5, 2015 | N |
I2 | B60 | T60 | 1234 | JANUARY 3, 2015 | N |
I3 | B70 | T70 | 1234 | FEBRUARY 1, 2015 | Y |
I4 | - | T90 | 1234 | JANUARY 5, 2015 | N |
I6 | - | T40 | 1234 | AUGUST 5, 2015 | N |
Regards
Etbin
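The "case 1" step of the solution above (complete a stock row from history on item + batch, but only through IN/ALL transactions) can be sketched in miniature. The demo below is an assumption-laden sketch using SQLite from Python, with a cut-down two-row history that is not the question's full data set; it only shows the join-plus-EXISTS filtering pattern, not the whole cascade:

```python
import sqlite3

# A stock row with missing document fields is completed from history,
# but an OUT transaction in history must not be used for the match.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE stock_table   (item_code TEXT, bat_no TEXT, txn_code TEXT, doc_no TEXT, boe_dt TEXT);
CREATE TABLE history_table (item_code TEXT, bat_no TEXT, txn_code TEXT, doc_no TEXT, updt_dt TEXT);
CREATE TABLE transaction1  (txn_code TEXT, txn_type TEXT);
INSERT INTO stock_table   VALUES ('I1', 'B1', NULL, NULL, NULL);
INSERT INTO history_table VALUES ('I1', 'B1', 'T1',  '1234', '2015-01-03');
INSERT INTO history_table VALUES ('I1', 'B1', 'T20', '9999', '2015-02-01');
INSERT INTO transaction1  VALUES ('T1', 'IN'), ('T20', 'OUT');
""")
row = con.execute("""
SELECT s.item_code, s.bat_no, h.txn_code, h.doc_no, h.updt_dt AS boe_dt
  FROM stock_table s
  LEFT JOIN history_table h
    ON h.item_code = s.item_code
   AND h.bat_no    = s.bat_no
   AND EXISTS (SELECT 1 FROM transaction1 t     -- only IN/ALL transactions qualify
                WHERE t.txn_code = h.txn_code
                  AND t.txn_type IN ('IN', 'ALL'))
 WHERE s.doc_no IS NULL
""").fetchone()
print(row)
```

Putting the EXISTS inside the join condition (rather than the WHERE clause) is what keeps the outer join outer: a stock row with no qualifying history match would still survive with NULL history columns, which is exactly what lets the later "case" steps pick it up.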