Row counter
I am currently building a counter column that increments within groups of consecutive rows.
SELECT EMPLOYEE_ID, DTE, HOURS, JOB, ACTIVITY
FROM (SELECT EMPLOYEE_ID, DTE, HOURS, JOB, ACTIVITY_CDE AS ACTIVITY,
             LAG(JOB, 1, JOB) OVER (ORDER BY DTE) AS prev_jobnum,
             LEAD(JOB, 1, JOB) OVER (ORDER BY DTE) AS next_jobnum
      FROM tablename
      WHERE T_YEAR >= '2012'
        AND EMPLOYEE_ID = '1234'
        AND HOURS >= '8')
WHERE JOB = 'SICK'
  AND (JOB = prev_jobnum OR JOB = next_jobnum)
The result:
EMPLOYEE_ID | DTE | HOURS | JOB | ACTIVITY | SICK_COUNTER |
1234 | 11/03/2013 | 7 | SICK | GENLV | 1 |
1234 | 12/03/2013 | 7 | SICK | GENLV | 2 |
1234 | 19/02/2014 | 7 | SICK | GENLV | 3 |
1234 | 20/02/2014 | 7 | SICK | GENLV | 4 |
1234 | 21/02/2014 | 7 | SICK | GENLV | 5 |
1234 | 07/08/2014 | 7 | HOLIDAY | GENLV | 1 |
1234 | 08/08/2014 | 7 | HOLIDAY | GENLV | 2 |
I need the counter to reset whenever the run of consecutive rows for the same JOB is broken. The result should look like this:
EMPLOYEE_ID | DTE | HOURS | JOB | ACTIVITY | SICK_COUNTER |
1234 | 11/03/2013 | 7 | SICK | GENLV | 1 |
1234 | 12/03/2013 | 7 | SICK | GENLV | 2 |
1234 | 19/02/2014 | 7 | SICK | GENLV | 1 |
1234 | 20/02/2014 | 7 | SICK | GENLV | 2 |
1234 | 21/02/2014 | 7 | SICK | GENLV | 3 |
1234 | 07/08/2014 | 7 | HOLIDAY | GENLV | 1 |
1234 | 08/08/2014 | 7 | HOLIDAY | GENLV | 2 |
Any ideas?
OK, this is my last attempt for today.
If you test it, don't forget some edge cases around the holidays.
If the requirements have been captured properly, perhaps we can find a less cumbersome approach:
with pre_grps as (
  select
    employee_id
  , dte
  , hours
  , job
  , activity
  -- 1 = start of a new group of consecutive SICK rows, 0 = no group start
  , case when job = 'SICK'
          -- a SICK row whose predecessor is SICK or HOLIDAY with hours >= 7
          -- cannot be the start of a new group
          and lag(job, 1, job) over (partition by employee_id order by dte) in ('SICK', 'HOLIDAY')
          and lag(hours, 1, 0) over (partition by employee_id order by dte) >= 7
         then 0
         when job = 'HOLIDAY'
         then case when lag(job, 1, job) over (partition by employee_id order by dte) = 'SICK'
                    and lag(hours, 1, 0) over (partition by employee_id order by dte) >= 7
                    and lead(job, 1, job) over (partition by employee_id order by dte) = 'SICK'
                    and lead(hours, 1, 0) over (partition by employee_id order by dte) >= 7
                   -- a HOLIDAY enclosed by SICK rows can never be the start of a group
                   -- of consecutive SICK (assumption: at most one HOLIDAY entry sits
                   -- between two SICK entries)
                   then 0
                   else 1
              end
         else 1
    end as grp_flag
  , case when job = 'HOLIDAY'
         then case when lag(job, 1, job) over (partition by employee_id order by dte) = 'SICK'
                    and lag(hours, 1, 0) over (partition by employee_id order by dte) >= 7
                    and lead(job, 1, job) over (partition by employee_id order by dte) = 'SICK'
                    and lead(hours, 1, 0) over (partition by employee_id order by dte) >= 7
                   then 'SICK'
                   else job
              end
         else job
    end as job_drv
  from tablename
)
, grps as (
  select
    employee_id
  , dte
  , hours
  , job
  , activity
  , sum(grp_flag) over (partition by employee_id order by dte) as grp
  , job_drv
  from pre_grps
)
select
  employee_id
, dte
, hours
, job
, activity
, sum(case when job_drv = 'SICK' then 1 else 0 end)
    over (partition by employee_id, grp order by dte) as counter
-- or perhaps better:
-- , case when job_drv = 'SICK'
--        then sum(1) over (partition by employee_id, grp order by dte)
--        else 0
--   end as counter
from grps
order by employee_id, dte, counter
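The CTE above is Oracle-specific. To make the intended reset behavior concrete, here is a minimal procedural sketch in Python — an illustration of the logic, not the Oracle solution itself. It assumes a counter run breaks when either the JOB value changes or the dates stop being consecutive calendar days, which is what the desired output implies:

```python
from datetime import date, timedelta

def add_reset_counter(rows):
    """rows: (employee_id, dte, hours, job) tuples, pre-sorted by employee_id, dte."""
    out = []
    prev = None
    counter = 0
    for emp, dte, hours, job in rows:
        same_run = (
            prev is not None
            and prev[0] == emp                      # same employee
            and prev[3] == job                      # same JOB value
            and dte - prev[1] == timedelta(days=1)  # consecutive calendar days
        )
        counter = counter + 1 if same_run else 1
        out.append((emp, dte, hours, job, counter))
        prev = (emp, dte, hours, job)
    return out

sample = [
    ("1234", date(2013, 3, 11), 7, "SICK"),
    ("1234", date(2013, 3, 12), 7, "SICK"),
    ("1234", date(2014, 2, 19), 7, "SICK"),
    ("1234", date(2014, 2, 20), 7, "SICK"),
    ("1234", date(2014, 2, 21), 7, "SICK"),
    ("1234", date(2014, 8, 7), 7, "HOLIDAY"),
    ("1234", date(2014, 8, 8), 7, "HOLIDAY"),
]
# counters come out as 1, 2, 1, 2, 3, 1, 2 -- matching the desired result above
```

The break condition is the only policy decision; the HOLIDAY-between-SICK special case from the SQL answer would go into `same_run` as well.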
Similar Questions
-
Count the number of rows in a table
Hello
I have a requirement: I want to write a SQL query that takes a schema name as input and returns the tables belonging to that schema along with the number of rows in each table.
An example of output:
===========
Can someone help me to write a query for this?

Table    No. of Rows
~~~~~    ~~~~~~~~~~~
A        123
B        126
C        234
.
.
.
Kind regards

If you are not sure the collected statistics are current, then you need dynamic SQL...
DECLARE
  VNUM NUMBER := 0;
  VSQL VARCHAR2(4000);
  vcount NUMBER := 0;
BEGIN
  DBMS_OUTPUT.ENABLE(NULL);
  DBMS_OUTPUT.PUT_LINE(RPAD('TABLE NAME',30,' ')||' '||RPAD('ROW COUNT',10,' '));
  DBMS_OUTPUT.PUT_LINE(RPAD('-',30,'-')||' '||RPAD('-',10,'-'));
  FOR C1 IN (SELECT TABLE_NAME, OWNER FROM ALL_TABLES
             WHERE OWNER = 'SCOTT' ORDER BY OWNER, TABLE_NAME)
  LOOP
    VSQL := 'SELECT COUNT(*) FROM '||C1.OWNER||'.'||C1.TABLE_NAME;
    EXECUTE IMMEDIATE VSQL INTO VNUM;
    DBMS_OUTPUT.PUT_LINE(RPAD(C1.TABLE_NAME,30,' ')||' '||RPAD(VNUM,10,' '));
    vcount := vcount + 1;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE(RPAD('-',LENGTH(vcount)+6,'-'));
  DBMS_OUTPUT.PUT_LINE(vcount||' Rows.');
  DBMS_OUTPUT.PUT_LINE(RPAD('-',LENGTH(vcount)+6,'-'));
END;
/

TABLE NAME                     ROW COUNT
------------------------------ ----------
BONUS                          0
DEPT                           4
EMP                            14
SALGRADE                       5
-------
4 Rows.
-------
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.44
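The same dynamic-SQL pattern — a table name cannot be a bind variable, so the COUNT statement is assembled per table — can be sketched outside Oracle. A minimal illustration using Python's built-in sqlite3, with made-up table names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table dept(id integer);
    insert into dept values (1), (2), (3), (4);
    create table bonus(id integer);
""")

# enumerate the schema's tables (ALL_TABLES in Oracle, sqlite_master here)
tables = [r[0] for r in con.execute(
    "select name from sqlite_master where type = 'table' order by name")]

counts = {}
for t in tables:
    # the table name cannot be bound as a parameter, so the statement is
    # built dynamically -- the same idea as EXECUTE IMMEDIATE above
    counts[t] = con.execute('select count(*) from "%s"' % t).fetchone()[0]

print(counts)  # {'bonus': 0, 'dept': 4}
```

As with EXECUTE IMMEDIATE, interpolating identifiers is safe only when the names come from the catalog itself, not from user input.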
Personally, I think it is a time-consuming process... ;)
HTH,
Prazy -
Table1.Rows.Add (SUD dialog)
Hello DIAdem users,
Extending a table from within a SUD dialog (inserting a new row at the end of the table) with
Call Table1.Rows.Add
takes about 0.3 seconds per row (Intel Core 2 Duo 3 GHz, 2 GB RAM, WIN XP SP3).
For roughly 600 rows the procedure therefore takes about 3 minutes, which is not very efficient. How can this procedure be made faster?
Or better:
Is there a method to extend a table by X new rows directly from the script? Thanks in advance for your help, merry Christmas and a happy New Year.
Hello!
It can perhaps be done more simply:
Table1.Rows.Count = Table1.Rows.Count + 600
For me that takes about 0.3 s in total.
A merry (remaining) Christmas and all the best for 2009.
Matthias
-
Hello Experts,
I have a problem that is a little tricky. The requirement: if both columns (Region and Code in the tables below) match exactly, it is the Best fit; if only one of the columns matches, it is a Medium fit; if neither column matches, it is the Worst fit.
Create Table Table1 (Filter varchar2(10), Region varchar2(10), Code varchar2(10), Revenue Number(15), Owner varchar2(5));
Table1:
Insert into Table1 values ('Test1', 'Midwest', '0900', 3000286, 'P1');
Insert into Table1 values ('Test1', 'Midwest', '0899', 36472323, 'P2');
Insert into Table1 values ('Test1', 'Midwest', '0898', 22472742, 'P3');
Insert into Table1 values ('Test1', 'West', '0901', 375237423, 'P1');
Insert into Table1 values ('Test1', 'West', '0700', 34737523, null);
Insert into Table1 values ('Test1', 'West', '0701', 95862077, 'P3');
Insert into Table1 values ('Test1', 'South', '0703', 73438953, 'P4');
Insert into Table1 values ('Test1', 'South', '0704', 87332089, 'P1');
Insert into Table1 values ('Test1', 'South', '0705', 98735162, 'P4');
Insert into Table1 values ('Test1', 'South', '0706', 173894762, 'P9');
Insert into Table1 values ('Test1', 'South', '0902', 72642511, 'P6');
Create Table Table2 (Filter varchar2(10), Region varchar2(10), Code varchar2(10), Limit1 Number(15), Limit2 Number(15));
Table2
Insert into Table2 values ('Test1', 'ALL', '0902', 15000, 10000);
Insert into Table2 values ('Test1', 'ALL', 'ALL', 20000, 12000);
Insert into Table2 values ('Test1', 'Midwest', '0900', 10000, 5000);
Insert into Table2 values ('Test1', 'Midwest', 'ALL', 18000, 8000);
Insert into Table2 values ('Test1', 'West', 'ALL', 16000, 6000);
Insert into Table2 values ('Test1', 'West', '0901', 10000, 5000);
Final output:

Filter  Region   Code  Revenue    Owner  Limit1  Limit2
Test1   Midwest  0900  3000286    P1     10000   5000   - Best (both Region and Code match)
Test1   Midwest  0899  36472323   P2     18000   8000   - Medium (only Region matches; we take the 'ALL' row for Code)
Test1   Midwest  0898  22472742   P3     18000   8000   - Medium (only Region matches; we take the 'ALL' row for Code)
Test1   West     0901  375237423  P1     10000   5000   - Best (both Region and Code match)
Test1   West     0700  34737523          16000   6000   - Medium (only Region matches; we take the 'ALL' row for Code)
Test1   West     0701  95862077   P3     16000   6000   - Medium (only Region matches; we take the 'ALL' row for Code)
Test1   South    0703  73438953   P4     20000   12000  - Worst (neither Region nor Code matches; we fall back to the 'ALL','ALL' row)
Test1   South    0704  87332089   P1     20000   12000  - Worst (neither Region nor Code matches; we fall back to the 'ALL','ALL' row)
Test1   South    0705  98735162   P4     20000   12000  - Worst (neither Region nor Code matches; we fall back to the 'ALL','ALL' row)
Test1   South    0706  173894762  P9     20000   12000  - Worst (neither Region nor Code matches; we fall back to the 'ALL','ALL' row)
Test1   South    0902  72642511   P6     15000   10000  - Medium (only Code matches; we take the 'ALL' row for Region)
In the final result we should have a row count equal to Table1's, and as soon as there is a match (Best first, then Medium, then Worst), any further matches should be ignored.
There are other columns in the tables as well.
Thank you very much!
As you wish...
select filter, region, code, region2, code2, revenue, owner, limit1, limit2, match
from (
  select filter, region, code, region2, code2, revenue, owner, limit1, limit2, match,
         row_number() over (partition by filter, region, code order by match) priority
  from (
    select a.filter, a.region, a.code, a.revenue, a.owner,
           b.region region2, b.code code2, b.limit1, b.limit2,
           case when (a.region, a.code) = ((b.region, b.code)) then 'Best'
                when a.region = b.region or a.code = b.code then 'Medium'
                else 'Worst'
           end match
    from table1 a
    join table2 b
      on a.filter = b.filter
     and (b.region, b.code) in ( (a.region, a.code),
                                 (a.region, 'ALL'),
                                 ('ALL', a.code),
                                 ('ALL', 'ALL') )
  )
)
where priority = 1
order by region, code;
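Note that the alphabetical accident 'Best' < 'Medium' < 'Worst' is what lets `order by match` double as the priority ranking. A minimal sketch of the same logic against Python's built-in sqlite3 (window functions need SQLite 3.25+), using a subset of the sample rows; the four-tuple IN list is rewritten as two IN conditions, which cover exactly the same four combinations:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table table1(filter text, region text, code text, revenue int, owner text);
    insert into table1 values
        ('Test1', 'Midwest', '0900', 3000286, 'P1'),
        ('Test1', 'Midwest', '0899', 36472323, 'P2'),
        ('Test1', 'South',   '0703', 73438953, 'P4'),
        ('Test1', 'South',   '0902', 72642511, 'P6');
    create table table2(filter text, region text, code text, limit1 int, limit2 int);
    insert into table2 values
        ('Test1', 'ALL',     '0902', 15000, 10000),
        ('Test1', 'ALL',     'ALL',  20000, 12000),
        ('Test1', 'Midwest', '0900', 10000, 5000),
        ('Test1', 'Midwest', 'ALL',  18000, 8000),
        ('Test1', 'West',    'ALL',  16000, 6000),
        ('Test1', 'West',    '0901', 10000, 5000);
""")

sql = """
select region, code, limit1, limit2, fit from (
    select a.region, a.code, b.limit1, b.limit2,
           case when a.region = b.region and a.code = b.code then 'Best'
                when a.region = b.region or  a.code = b.code then 'Medium'
                else 'Worst'
           end as fit,
           -- 'Best' < 'Medium' < 'Worst' alphabetically, so ordering by the
           -- label ranks candidate rows by quality of fit
           row_number() over (
               partition by a.filter, a.region, a.code
               order by case when a.region = b.region and a.code = b.code then 'Best'
                             when a.region = b.region or  a.code = b.code then 'Medium'
                             else 'Worst' end) as priority
    from table1 a
    join table2 b
      on a.filter = b.filter
     and b.region in (a.region, 'ALL')   -- same four combinations as the
     and b.code   in (a.code,   'ALL')   -- tuple IN list in the Oracle answer
)
where priority = 1
order by region, code
"""
for row in con.execute(sql):
    print(row)
```

Each Table1 row keeps only its best-ranked Table2 candidate, reproducing the Best/Medium/Worst fallback described in the question.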
-
OAF: row.length always returns 1
Hello world
EmployeeVOImpl vo1 = getEmployeeRowVO1();
Row[] row = vo1.getAllRowsInRange();
for (int i = 0; i < row.length; i++) {
    EmployeeVORowImpl rowi = (EmployeeVORowImpl)row[i];
    .
    .
    .
}
The problem is that row.length always returns 1, so the loop runs only once.
If I use vo1.getRowCount() instead of row.length, the loop reaches a second iteration but then fails at EmployeeVORowImpl rowi = (EmployeeVORowImpl)row[i]; with oracle.apps.fnd.framework.OAException: java.lang.ArrayIndexOutOfBoundsException: 1.
System.out.println("Row count: " + vo1.getRowCount()); returns 450.
The VO has 450 records.
What could be wrong here?
Any suggestions would be much appreciated...
Thank you.

Hello,
You can try the code below:
OAViewObject vo = (OAViewObject)getEmployeeRowVO1();
EmployeeRowVORowImpl row = null;
int fetchedRowCount = vo.getFetchedRowCount();
RowSetIterator yourIter = vo.createRowSetIterator("yourIter");
if (fetchedRowCount > 0)
{
    yourIter.setRangeStart(0);
    yourIter.setRangeSize(fetchedRowCount);
    for (int i = 0; i < fetchedRowCount; i++)
    {
        row = (EmployeeRowVORowImpl)yourIter.getRowAtRangeIndex(i);
    }
}
yourIter.closeRowSetIterator();
Thank you
JIT

Published by: appsjit on Sep 10, 2012 15:10
-
How to display the total number of rows in the dashboard
Hello
I have a dashboard report retrieving the list of projects and their details, and it grows daily. Instead of having users download the report to find out the total number of projects, I want to display "total number of projects" on its own in the dashboard. How can I do that?
Also, is it possible to make it a pop-up or something a little flashy, like a news ticker? Not necessary, but it would be very good if I can do it.
Thanks for your time and your help.

Create a report and add a column; in that column write the function max(rcount(1)). Reference this column by its position (e.g. @1) in a narrative view. You can then display only the narrative view in the dashboard. For a flashy kind of display, you can use the ticker view and reference the same column in that view.
Refer to this link to see how to display the total number of records:
http://Siebel.ITtoolbox.com/groups/technical-functional/Siebel-Analytics-l/display-row-count-in-top-of-the-table-view-3704999
Assign points if found useful.
-
How to get the total row count with SELECT COUNT(*) and put it on a page?
Hello
I use JDeveloper 10.1.3.4. I need to get the total number of rows in a table and display it on a page, and I am having a problem doing so. At the sqlplus prompt the row count would simply be, for example:

select count(*) from BILL;

I wonder if getting this simple number must be so complicated, and whether there are simpler, better ways. Here is how I do it and the problem encountered.

1. The page to display the number is summary.jspx. It has a backing bean registered as "summary" (the managed bean name in faces-config.xml); the bean class name is "Summary". The output component on the page is:

<h:outputText value="#{summary.totalStudentsCount}" binding="#{summary.outputText5}" id="outputText5"/>

2. The Summary bean code is:

private Number totalStudentsCount;
public static int NUMBER = Types.NUMERIC;

public void setTotalStudentsCount(Number totalStudentsCount) {
    this.totalStudentsCount = totalStudentsCount;
}

public Number getTotalStudentsCount() {
    ZBLCModuleImpl zblcam = getZBLCModuleImpl();
    LoggedInStudentImpl studentTable = (LoggedInStudentImpl)zblcam.getLoggedInStudent();
    String sql = "select count(lsap_uid) from BILL";
    studentTable.setQuery(sql);
    return (Number)CallStoredFunction(NUMBER, "get_total_students(?)", new Object[] {});
}

private ZBLCModuleImpl getZBLCModuleImpl() {
    FacesContext fc = FacesContext.getCurrentInstance();
    ValueBinding vb = fc.getApplication().createValueBinding("#{data}");
    BindingContext bc = (BindingContext)vb.getValue(fc);
    DCDataControl dc = bc.findDataControl("ZBLCModuleDataControl");
    ApplicationModule am = (ApplicationModule)dc.getDataProvider();
    return (ZBLCModuleImpl)am;
}

protected Object CallStoredFunction(int sqlReturnType, String stmt, Object[] bindVars) {
    CallableStatement st = null;
    ZBLCModuleImpl zblcam = getZBLCModuleImpl();
    try {
        st = zblcam.getDBTransaction().createCallableStatement("begin ? := " + stmt + "; end", 0);
        st.registerOutParameter(1, sqlReturnType);
        if (bindVars != null) {
            for (int z = 0; z < bindVars.length; z++) {
                st.setObject(z + 2, bindVars[z]);
            }
        }
        st.executeUpdate();
        return st.getObject(1);
    } catch (SQLException e) {
        throw new JboException(e);
    } finally {
        if (st != null) {
            try {
                st.close();
            } catch (SQLException e) {
                throw new JboException(e);
            }
        }
    }
}

The idea of using a "helper" method to call a stored function is from section 25.5.3 of the Developer's Guide, which is the closest thing to my need. It is written for functions with one IN argument, but in my case the function takes no IN argument. That is why, when calling the helper method CallStoredFunction(), I passed an empty array as the last argument, and apparently that is what caused the problem:

return (Number)CallStoredFunction(NUMBER, "get_total_students(?)", new Object[] {});

3. The stored function has been tested and works fine at the sqlplus prompt:

create or replace function get_total_students return NUMBER
AS
  v_student_count NUMBER;
BEGIN
  select count(ldap_uid) into v_student_count from bill;
  return v_student_count;
END;

4. When the summary.jspx page is run, the browser is full of error messages; the first long line is this (split across lines for ease of reading):

javax.faces.el.EvaluationException: javax.faces.el.EvaluationException:
Error getting property 'totalStudentsCount' from bean of type
zblc.viewcontroller.backing.staff.Summary: oracle.jbo.JboException:
JBO-29000: Unexpected exception caught: java.sql.SQLException,
msg=Missing IN or OUT parameter at index:: 2

So:
(1) What is the problem? What does "Missing IN or OUT parameter at index:: 2" refer to? Does it have to do with the empty array passed as the last argument to the helper method?
(2) Is this approach overkill, and are there simpler and better ways?
Thank you very much for help!
Newman

Hello,
Is there a specific reason why you don't simply create a read-only view object with select count(*) as OFCASES from MYTABLE and then just drag and drop the ofcases attribute onto the page?
Kind regards
Branislav
-
Report Generation Toolkit - appending to an existing table
Previously, I used TXT (CSV) files to store test data. Some tests run for 1,000 hours and collect anywhere from a few thousand rows to more than 100,000 rows of data. With this approach I have to write VBA macros to parse and format the data, which is very time-consuming. I want to try the LabVIEW Report Generation Toolkit (RGT) to write directly into an Excel spreadsheet and do some of the steps on the fly, like the VBA macro does, to help reduce the processing time and the manual work in analysing the data.
However, my concern is that sometimes a test must be stopped partway through a long trial and then continued, appending data to the existing data. I don't see how to do that using the RGT. I guess you would have to locate the latest data in the spreadsheet... with a TXT file, LabVIEW does that automatically. In Excel, I use VBA code like this:
RowMax = SheetRD.Cells(SheetRD.Rows.Count, "A").End(xlUp).Row
Does the RGT have features like that, or will I have to call a macro and then work with the returned RowMax value?
You can use the Excel Get Last Row.vi from the Report Generation -> Excel Specific -> Excel General palette. Add 1 to get the next empty row.
Ben64
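For the TXT/CSV fallback the thread mentions, resuming a long test is just a matter of counting the rows already on disk and appending. A minimal stdlib-only Python sketch; the file name and column layout are made up for illustration:

```python
import csv
import os

LOG = "test_log.csv"  # hypothetical log file name

def append_samples(samples):
    """Append rows to the log, continuing the row numbering from the last run."""
    exists = os.path.exists(LOG)
    last = 0
    if exists:
        with open(LOG, newline="") as f:
            last = sum(1 for _ in csv.reader(f)) - 1  # subtract the header row
    with open(LOG, "a", newline="") as f:
        w = csv.writer(f)
        if not exists:
            w.writerow(["row", "value"])
        for i, v in enumerate(samples, start=last + 1):
            w.writerow([i, v])

append_samples([1.0, 2.0])  # first run writes the header plus rows 1-2
append_samples([3.0])       # a "resumed" run continues at row 3
```

This is the same idea as Excel Get Last Row.vi plus one: find the last occupied row, then write into the next one.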
-
display data in a tabular format
Dear Sir
I followed the example at
the link below, which really helped me to solve my problem.
Now my data shows, but it is displayed twice.
Here is my code to display data.
for (int count = 0; count < 4; count++)
{
    rows[count] = new VerticalFieldManager(VerticalFieldManager.NO_HORIZONTAL_SCROLL |
        VerticalFieldManager.NO_VERTICAL_SCROLL);

    // Add 21 rows of data in the column
    displayData = this.split(data, '|');

    for (int rowCount = 0; rowCount < displayData.length; rowCount++)
    {
        sb.delete(0, sb.length());
        sb.append("data ");
        sb.append(count);
        sb.append(",");
        sb.append(rowCount);
        sb.append(" ");
        sb.append(displayData[rowCount]);
        displayData[rowCount] = sb.toString();
        sb.append(displayData[rowCount]);
        sb.append("|");
        rows[count].add(new LabelField(sb.toString(), LabelField.FOCUSABLE));
    }

    // Add the row to the rowHolder.
    rowHolder.add(rows[count]);
}
dataScroller.add (rowHolder);
add(dataScroller);

Help, please
Rgds
Nadir
I think you will find this code gives you better results:
How - to create a presentation of the rich UI at TableLayoutManager
Article number: DB-00783
http://www.BlackBerry.com/knowledgecenterpublic/livelink.exe/fetch/2000/348583/800332/800505/800508/...

There is also a blog about this manager:
-
Hi Experts,
I am new to Oracle. I am asking for your help to fix a performance problem with an insert query.
I have an insert query that searches for records in a partitioned table.
Background: the user says the query used to run in 30 minutes on 10g. The database was upgraded to 12c by one of my colleagues. Now the query runs continuously for hours with no result. I checked the settings: SGA is 9 GB, Windows - 4 GB. The DB block size is 8192, db_file_multiblock_read_count is 128, and the PGA aggregate target is 2457M.
The parameters are given below
NAME  TYPE  VALUE
----  ----  -----
DBFIPS_140  boolean  FALSE
O7_DICTIONARY_ACCESSIBILITY  boolean  FALSE
active_instance_count  integer
aq_tm_processes  integer  1
archive_lag_target  integer  0
asm_diskgroups  string
asm_diskstring  string
asm_power_limit  integer  1
asm_preferred_read_failure_groups  string
audit_file_dest  string  C:\APP\ADM
audit_sys_operations  boolean  TRUE
audit_trail  string  DB
awr_snapshot_time_offset  integer  0
background_core_dump  string  partial
background_dump_dest  string  C:\APP\PRO \RDBMS\TRA
backup_tape_io_slaves  boolean  FALSE
bitmap_merge_area_size  integer  1048576
blank_trimming  boolean  FALSE
buffer_pool_keep  string
buffer_pool_recycle  string
cell_offload_compaction  string  ADAPTIVE
cell_offload_decryption  boolean  TRUE
cell_offload_parameters  string
cell_offload_plan_display  string  AUTO
cell_offload_processing  boolean  TRUE
cell_offloadgroup_name  string
circuits  integer
client_result_cache_lag  big integer  3000
client_result_cache_size  big integer  0
clonedb  boolean  FALSE
cluster_database  boolean  FALSE
cluster_database_instances  integer  1
cluster_interconnects  string
commit_logging  string
commit_point_strength  integer  1
commit_wait  string
commit_write  string
common_user_prefix  string  C##
compatible  string  12.1.0.2.0
connection_brokers  string  ((TYPE=DED ((TYPE=EM
control_file_record_keep_time  integer  7
control_files  string  G:\ORACLE\TROL01.CTL FAST_RECOV NTROL02.CT
control_management_pack_access  string  diagnostic
core_dump_dest  string  C:\app\dia bal12\cdum
cpu_count  integer  4
create_bitmap_area_size  integer  8388608
create_stored_outlines  string
cursor_bind_capture_destination  string  memory+disk
cursor_sharing  string  EXACT
cursor_space_for_time  boolean  FALSE
db_16k_cache_size  big integer  0
db_2k_cache_size  big integer  0
db_32k_cache_size  big integer  0
db_4k_cache_size  big integer  0
db_8k_cache_size  big integer  0
db_big_table_cache_percent_target  string  0
db_block_buffers  integer  0
db_block_checking  string  FALSE
db_block_checksum  string  TYPICAL
db_block_size  integer  8192
db_cache_advice  string  ON
db_cache_size  big integer  0
db_create_file_dest  string
db_create_online_log_dest_1  string
db_create_online_log_dest_2  string
db_create_online_log_dest_3  string
db_create_online_log_dest_4  string
db_create_online_log_dest_5  string
db_domain  string
db_file_multiblock_read_count  integer  128
db_file_name_convert  string
db_files  integer  200
db_flash_cache_file  string
db_flash_cache_size  big integer  0
db_flashback_retention_target  integer  1440
db_index_compression_inheritance  string  NONE
db_keep_cache_size  big integer  0
db_lost_write_protect  string  NONE
db_name  string  ORCL
db_performance_profile  string
db_recovery_file_dest  string  G:\Oracle\ y_Area
db_recovery_file_dest_size  big integer  12840M
db_recycle_cache_size  big integer  0
db_securefile  string  PREFERRED
db_ultra_safe  string
db_unique_name  string  ORCL
db_unrecoverable_scn_tracking  boolean  TRUE
db_writer_processes  integer  1
dbwr_io_slaves  integer  0
ddl_lock_timeout  integer  0
deferred_segment_creation  boolean  TRUE
dg_broker_config_file1  string  C:\APP\PRO \DATABASE\
dg_broker_config_file2  string  C:\APP\PRO \DATABASE\
dg_broker_start  boolean  FALSE
diagnostic_dest  string
disk_asynch_io  boolean  TRUE
dispatchers  string  (PROTOCOL= 12XDB)
distributed_lock_timeout  integer  60
dml_locks  integer  2076
dnfs_batch_size  integer  4096
dst_upgrade_insert_conv  boolean  TRUE
enable_ddl_logging  boolean  FALSE
enable_goldengate_replication  boolean  FALSE
enable_pluggable_database  boolean  FALSE
event  string
exclude_seed_cdb_view  boolean  TRUE
fal_client  string
fal_server  string
fast_start_io_target  integer  0
fast_start_mttr_target  integer  0
fast_start_parallel_rollback  string  LOW
file_mapping  boolean  FALSE
fileio_network_adapters  string
filesystemio_options  string
fixed_date  string
gcs_server_processes  integer  0
global_context_pool_size  string
global_names  boolean  FALSE
global_txn_processes  integer  1
hash_area_size  integer  131072
heat_map  string
hi_shared_memory_address  integer  0
hs_autoregister  boolean  TRUE
ifile  file
inmemory_clause_default  string
inmemory_force  string  DEFAULT
inmemory_max_populate_servers  integer  0
inmemory_query  string  ENABLE
inmemory_size  big integer  0
inmemory_trickle_repopulate_servers_percent  integer  1
instance_groups  string
instance_name  string  ORCL
instance_number  integer  0
instance_type  string  RDBMS
instant_restore  boolean  FALSE
java_jit_enabled  boolean  TRUE
java_max_sessionspace_size  integer  0
java_pool_size  big integer  0
java_restrict  string  none
java_soft_sessionspace_limit  integer  0
job_queue_processes  integer  1000
large_pool_size  big integer  0
ldap_directory_access  string  NONE
ldap_directory_sysauth  string  no
license_max_sessions  integer  0
license_max_users  integer  0
license_sessions_warning  integer  0
listener_networks  string
local_listener  string  (ADDRESS= =i184borac (NET)(PORT=
lock_name_space  string
lock_sga  boolean  FALSE
log_archive_config  string
log_archive_dest  string
log_archive_dest_1  string
log_archive_dest_10  string
log_archive_dest_11  string
log_archive_dest_12  string
log_archive_dest_13  string
log_archive_dest_14  string
log_archive_dest_15  string
log_archive_dest_16  string
log_archive_dest_17  string
log_archive_dest_18  string
log_archive_dest_19  string
log_archive_dest_2  string
log_archive_dest_20  string
log_archive_dest_21  string
log_archive_dest_22  string
log_archive_dest_23  string
log_archive_dest_24  string
log_archive_dest_25  string
log_archive_dest_26  string
log_archive_dest_27  string
log_archive_dest_28  string
log_archive_dest_29  string
log_archive_dest_3  string
log_archive_dest_30  string
log_archive_dest_31  string
log_archive_dest_4  string
log_archive_dest_5  string
log_archive_dest_6  string
log_archive_dest_7  string
log_archive_dest_8  string
log_archive_dest_9  string
log_archive_dest_state_1  string  enable
log_archive_dest_state_10  string  enable
log_archive_dest_state_11  string  enable
log_archive_dest_state_12  string  enable
log_archive_dest_state_13  string  enable
log_archive_dest_state_14  string  enable
log_archive_dest_state_15  string  enable
log_archive_dest_state_16  string  enable
log_archive_dest_state_17  string  enable
log_archive_dest_state_18  string  enable
log_archive_dest_state_19  string  enable
log_archive_dest_state_2  string  enable
log_archive_dest_state_20  string  enable
log_archive_dest_state_21  string  enable
log_archive_dest_state_22  string  enable
log_archive_dest_state_23  string  enable
log_archive_dest_state_24  string  enable
log_archive_dest_state_25  string  enable
log_archive_dest_state_26  string  enable
log_archive_dest_state_27  string  enable
log_archive_dest_state_28  string  enable
log_archive_dest_state_29  string  enable
log_archive_dest_state_3  string  enable
log_archive_dest_state_30  string  enable
log_archive_dest_state_31  string  enable
log_archive_dest_state_4  string  enable
log_archive_dest_state_5  string  enable
log_archive_dest_state_6  string  enable
log_archive_dest_state_7  string  enable
log_archive_dest_state_8  string  enable
log_archive_dest_state_9  string  enable
log_archive_duplex_dest  string
log_archive_format  string  ARC%S_%R.%
log_archive_max_processes  integer  4
log_archive_min_succeed_dest  integer  1
log_archive_start  boolean  TRUE
log_archive_trace  integer  0
log_buffer  big integer  28784K
log_checkpoint_interval  integer  0
log_checkpoint_timeout  integer  1800
log_checkpoints_to_alert  boolean  FALSE
log_file_name_convert  string
max_dispatchers  integer
max_dump_file_size  string  unlimited
max_enabled_roles  integer  150
max_shared_servers  integer
max_string_size  string  STANDARD
memory_max_target  big integer  0
memory_target  big integer  0
nls_calendar  string  GREGORIAN
nls_comp  string  BINARY
nls_currency  string  u
nls_date_format  string  DD-MON-RR
nls_date_language  string  ENGLISH
nls_dual_currency  string  C
nls_iso_currency  string  UNITED KIN
nls_language  string  ENGLISH
nls_length_semantics  string  BYTE
nls_nchar_conv_excp  string  FALSE
nls_numeric_characters  string  .,
nls_sort  string  BINARY
nls_territory  string  UNITED KIN
nls_time_format  string  HH24.MI.SS
nls_time_tz_format  string  HH24.MI.SS
nls_timestamp_format  string  DD-MON-RR
nls_timestamp_tz_format  string  DD-MON-RR
noncdb_compatible  boolean  FALSE
object_cache_max_size_percent  integer  10
object_cache_optimal_size  integer  102400
olap_page_pool_size  big integer  0
open_cursors  integer  300
open_links  integer  4
open_links_per_instance  integer  4
optimizer_adaptive_features  boolean  TRUE
optimizer_adaptive_reporting_only  boolean  FALSE
optimizer_capture_sql_plan_baselines  boolean  FALSE
optimizer_dynamic_sampling  integer  2
optimizer_features_enable  string  12.1.0.2
optimizer_index_caching  integer  0
optimizer_index_cost_adj  integer  100
optimizer_inmemory_aware  boolean  TRUE
optimizer_mode  string  ALL_ROWS
optimizer_secure_view_merging  boolean  TRUE
optimizer_use_invisible_indexes  boolean  FALSE
optimizer_use_pending_statistics  boolean  FALSE
optimizer_use_sql_plan_baselines  boolean  TRUE
os_authent_prefix  string  OPS$
os_roles  boolean  FALSE
parallel_adaptive_multi_user  boolean  TRUE
parallel_automatic_tuning  boolean  FALSE
parallel_degree_level  integer  100
parallel_degree_limit  string  CPU
parallel_degree_policy  string  MANUAL
parallel_execution_message_size  integer  16384
parallel_force_local  boolean  FALSE
parallel_instance_group  string
parallel_io_cap_enabled  boolean  FALSE
parallel_max_servers  integer  160
parallel_min_percent  integer  0
parallel_min_servers  integer  16
parallel_min_time_threshold  string  AUTO
parallel_server  boolean  FALSE
parallel_server_instances  integer  1
parallel_servers_target  integer  64
parallel_threads_per_cpu  integer  2
pdb_file_name_convert  string
pdb_lockdown  string
pdb_os_credential  string
permit_92_wrap_format  boolean  TRUE
pga_aggregate_limit  big integer  4914M
pga_aggregate_target  big integer  2457M
plscope_settings  string  IDENTIFIER
plsql_ccflags  string
plsql_code_type  string  INTERPRETED
plsql_debug  boolean  FALSE
plsql_optimize_level  integer  2
plsql_v2_compatibility  boolean  FALSE
plsql_warnings  string  DISABLE:AL
pre_page_sga  boolean  TRUE
processes  integer  300
processor_group_name  string
query_rewrite_enabled  string  TRUE
query_rewrite_integrity  string  enforced
rdbms_server_dn  string
read_only_open_delayed  boolean  FALSE
recovery_parallelism  integer  0
recyclebin  string  on
redo_transport_user  string
remote_dependencies_mode  string  TIMESTAMP
remote_listener  string
remote_login_passwordfile  string  EXCLUSIVE
remote_os_authent  boolean  FALSE
remote_os_roles  boolean  FALSE
replication_dependency_tracking  boolean  TRUE
resource_limit  boolean  TRUE
resource_manager_cpu_allocation  integer  4
resource_manager_plan  string
result_cache_max_result  integer  5
result_cache_max_size  big integer  46208K
result_cache_mode  string  MANUAL
result_cache_remote_expiration  integer  0
resumable_timeout  integer  0
rollback_segments  string
sec_case_sensitive_logon  boolean  TRUE
sec_max_failed_login_attempts  integer  3
sec_protocol_error_further_action  string  (DROP,3)
sec_protocol_error_trace_action  string  TRACE
sec_return_server_release_banner  boolean  FALSE
serial_reuse  string  disable
service_names  string  ORCL
session_cached_cursors  integer  50
session_max_open_files  integer  10
sessions  integer  472
sga_max_size  big integer  9024M
sga_target  big integer  9024M
shadow_core_dump  string  none
shared_memory_address  integer  0
shared_pool_reserved_size  big integer  70464307
shared_pool_size  big integer  0
shared_server_sessions  integer
shared_servers  integer  1
skip_unusable_indexes  boolean  TRUE
smtp_out_server  string
sort_area_retained_size  integer  0
sort_area_size  integer  65536
spatial_vector_acceleration  boolean  FALSE
spfile  string  C:\APP\PRO \DATABASE\
sql92_security  boolean  FALSE
sql_trace  boolean  FALSE
sqltune_category  string  DEFAULT
standby_archive_dest  string  %ORACLE_HO
standby_file_management  string  MANUAL
star_transformation_enabled  string  TRUE
statistics_level  string  TYPICAL
streams_pool_size  big integer  0
tape_asynch_io  boolean  TRUE
temp_undo_enabled  boolean  FALSE
thread  integer  0
threaded_execution  boolean  FALSE
timed_os_statistics  integer  0
timed_statistics  boolean  TRUE
trace_enabled  boolean  TRUE
tracefile_identifier  string
transactions  integer  519
transactions_per_rollback_segment  integer  5
undo_management  string  AUTO
undo_retention  integer  900
undo_tablespace  string  UNDOTBS1
unified_audit_sga_queue_size  integer  1048576
use_dedicated_broker  boolean  FALSE
use_indirect_data_buffers  boolean  FALSE
use_large_pages  string  TRUE
user_dump_dest  string  C:\APP\PRO \RDBMS\TRA
utl_file_dir  string
workarea_size_policy  string  AUTO
xml_db_events  string  enable

Thanks in advance
Firstly, thank you for posting the 10g execution plan, which was one of the key things we were missing.
Second, you realize that you have completely different execution plans, so you can expect different behavior on each system.
Your 10g plan has a total cost of 23,959, while your 12c plan has a cost of 95,373, which is almost 4 times more. All things being equal, cost is supposed to relate directly to execution time, so I would expect the 12c plan to take much longer to run.
From what I can see, the 10g plan begins with a full table scan on DEALERS, then a full scan on the SCARF_VEHICLE_EXCLUSIONS table, then a full scan on the CBX_tlemsani_2000tje table, and then a full scan on the CLAIM_FACTS table. The first three of these table scans have a very low cost (2 each), while the last has a huge cost of 172K. Likewise, the first three scans produce very few rows in 10g, fewer than 1,000 rows each, while the last table scan produces 454K rows.
It also looks as though something has gone wrong in the 10g optimizer plan - maybe a bug, one which I believe Jonathan Lewis has commented on. Despite the full table scan with a cost of 172K, the NESTED LOOPS it is part of has a cost of only 23,949, or 24K. The maths does not add up in the 10g plan. In other words, maybe it is not really the optimal plan: the 10g optimizer may have got its sums wrong, and 12c may have got them right. But luckily this 'imperfect' 10g plan happens to run fairly fast, for one reason or another.
The 12c plan starts with similar table scans but in a different order. The main difference is that instead of a full table scan on CLAIM_FACTS, it does an index range scan on CLAIM_FACTS_AK9 at a cost of 95,366. That is the single main component of the final total cost of 95,373.
Suggestions for what to do? It is difficult, because there is clearly an anomaly in the 10g system to have produced the particular execution plan it uses. And there is other information that you have not provided - see later.
You could try to force a full table scan on CLAIM_FACTS by adding a suitable hint, e.g. "SELECT /*+ FULL(CF) */ cf.vehicle_chass_no ...". However, hints are very difficult to use and do not guarantee that you will get the desired end result. So be careful. For testing on 12c it may be worth trying, just to see what happens and what the resulting execution plan looks like. But I would not use such a simple, single hint in a production system, for a variety of reasons. For testing only, it might help to see whether you can force the full table scan on CLAIM_FACTS as in 10g, and whether the resulting performance is the same.
Both plans are parallel ones, which means that the query is broken down into separate, independent steps, and several steps are executed at the same time, i.e. several CPUs will be used and there will be several disk reads happening at the same time. (That is a crude characterisation of how parallel query works.) If the 10g and 12c systems do not have the same hardware configuration, then you would naturally expect different elapsed times when running the same parallel queries. See the end of this reply for the additional information you could provide.
But I would be very suspicious of the hardware configuration of the two systems. Maybe the 10g system has 16 CPU cores or more and hundreds of disks in a big drive array, and maybe the 12c system has only 4 CPU cores and 4 disks. That would explain a lot about why 12c takes hours to run what 10g does in only 30 minutes.
Remember what I said in my last reply:
"Without any information to the contrary, I would guess that the filter conditions are weakly selective, that the optimizer believes it needs most of the data in the table, and that a full table scan or even a wide index range scan is the 'best' way to run this SQL. In other words, your query simply takes time because your tables are big and your query needs most of the data in those tables."
When dealing with very large tables and doing full parallel table scans on them, the most important factor is the amount of raw hardware you can throw at the problem. A system with twice the number of CPUs and twice the number of disks will run the same parallel query in roughly half the time. That, rather than the execution plan itself, could be the main reason the 12c system is so much slower than the 10g system.
You could also provide us with the following information, which would allow a better analysis:
- Row counts for each table referenced in the query, and whether any of them are partitioned.
- Hardware configurations of both systems - the 10g and the 12c. Number of CPUs, model and speed, physical memory, number of disks.
- The disks are very important - do the 10g and 12c systems have similar disk subsystems? Are you using plain old disks, or do you have a SAN, or some sort of disk array? Are the drive arrays identical on both systems? How are they connected? Fast Fibre Channel, or something else? Maybe even network storage?
- What is the size of the SGA on both systems? The values of MEMORY_TARGET and SGA_TARGET.
- Does the CLAIM_FACTS_AK9 index exist on the 10g system? I assume it does, but I would like that confirmed to be safe.
John Brady
-
"Index was outside the bounds of the array" in OracleUdt.SetValue()
I need better eyes on this; I have been beating mine up for days now.
I have a lot of classes built to pass Oracle UDTs to procedures in a package. They all work, including several that are almost identical to the one giving me fits. But this one returns the error "Index was outside the bounds of the array" when calling OracleUdt.SetValue().
The absolute minimum code is below, and it's a mouthful. My apologies for the length.
--- Oracle types ---
create or replace type DMA_NUM_Varray IS VARRAY(250) OF NUMBER;
--- In an Oracle package ---
PROCEDURE Create_commercials_Owr (f_dma_num_tab IN DMA_NUM_Varray) IS ...
This procedure takes 4 other parameters, including 2 other UDTs, all defined before this one in the parameter list. One of them is another VARRAY(50), and no error is raised for it - only for the DMANumberArray.
--- C# .NET ---
public class DMANumberArray : INullable, IOracleCustomType
{
    [OracleArrayMapping()]
    public OracleDecimal[] Array;
    private bool isNull;
    private OracleUdtStatus[] statusArray;

    public OracleUdtStatus[] StatusArray
    {
        get { return this.statusArray; }
        set { this.statusArray = value; }
    }

    public virtual bool IsNull
    {
        get { return isNull; }
    }

    public static DMANumberArray Null
    {
        get
        {
            DMANumberArray obj = new DMANumberArray();
            obj.isNull = true;
            return obj;
        }
    }

    public virtual void FromCustomObject(OracleConnection oracleConn, IntPtr udt)
    {
        OracleUdt.SetValue(oracleConn, udt, 0, Array, statusArray);
    }

    public virtual void ToCustomObject(OracleConnection oracleConn, IntPtr udt)
    {
        object objectStatusArray = null;
        Array = (OracleDecimal[]) OracleUdt.GetValue(oracleConn, udt, 0, out objectStatusArray);
        statusArray = (OracleUdtStatus[]) objectStatusArray;
    }
}

[OracleCustomTypeMapping("APCTS.DMA_NUM_VARRAY")]
public class DMANumberArrayFactory : IOracleCustomTypeFactory, IOracleArrayTypeFactory
{
    public IOracleCustomType CreateObject()
    {
        return new DMANumberArray();
    }

    public Array CreateArray(int elementCount)
    {
        return new OracleDecimal[elementCount];
    }

    public Array CreateStatusArray(int elementCount)
    {
        return new OracleUdtStatus[elementCount];
    }
}
DataTable dmaTable = new DataTable();
using (SqlDataAdapter da = new SqlDataAdapter(query, sql))
{
    da.Fill(dmaTable);
}

DMANumberArray dma = new DMANumberArray();
int idCount = dmaTable.Rows.Count;
if (idCount > 250) idCount = 250;   // The error occurs for all values >= 5, but is fine for 1-4
dma.Array = new OracleDecimal[idCount];   // limit of 250
for (int i = 0; i < idCount; i++)
{
    dma.Array[i] = OracleDecimal.Parse(dmaTable.Rows[i]["DMA_Number"].ToString());
}
dma.StatusArray = new OracleUdtStatus[] { OracleUdtStatus.NotNull, OracleUdtStatus.Null, OracleUdtStatus.NotNull, OracleUdtStatus.NotNull };

string query = "APCTS.OWR_APIS.Create_commercials_Owr";
using (OracleCommand cmd = new OracleCommand(query, oracle))
{
    cmd.CommandType = CommandType.StoredProcedure;
    OracleParameter paramDMAArrayObject = new OracleParameter();
    paramDMAArrayObject.OracleDbType = OracleDbType.Array;
    paramDMAArrayObject.Direction = ParameterDirection.Input;
    paramDMAArrayObject.UdtTypeName = "APCTS.DMA_NUM_VARRAY";
    paramDMAArrayObject.Value = dma;
    cmd.Parameters.Add(paramDMAArrayObject);
    cmd.ExecuteNonQuery();
}
I can't for the life of me see where anything is being indexed beyond the bounds of the array, since it is sized and limited to 250 elements.
The only weird thing I see is in the DMANumberArrayFactory class, specifically CreateArray. When I break there, the value of elementCount is always zero, even when the array of the UDT object was created with a size greater than zero.
What did I miss?
Found the problem. I was looking at the wrong array. It is the status array that was causing the error. Nothing in the documentation clearly explains what this array is for, that its size must match the size of the data array, or why you would set each element to Null or NotNull.
But once I sized it to match and set each element, the error was gone. Bad documentation. Who writes these things, and why don't they write full explanations?
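The rule the poster discovered can be stated simply: the status array is a parallel array of null flags, one per element of the data array, so the two lengths must match. A minimal language-neutral sketch of the idea in Python (illustrative only - this is not ODP.NET code, and the function name is mine):

```python
# Sketch of the "parallel status array" rule: one null-flag per data element.
# Combining the two arrays only makes sense when their lengths match.

def combine(values, status):
    """Return the effective element list, honouring per-element null flags."""
    if len(values) != len(status):
        # This mismatch is the situation that produced the ODP.NET error.
        raise IndexError("status array length must match data array length")
    return [None if is_null else v for v, is_null in zip(values, status)]

values = [11, 22, 33, 44, 55]
status = [False, True, False, False, False]   # element 1 is NULL

print(combine(values, status))   # [11, None, 33, 44, 55]

# A 4-element status array against 5 data elements fails, as in the question:
try:
    combine(values, [False, True, False, False])
except IndexError as e:
    print("error:", e)
```

The 4-element StatusArray in the question against a 5-or-more-element data array reproduces exactly this mismatch, which is why 1-4 elements worked and 5+ failed.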
-
The SQL below completes in a few seconds in development, but in production it just keeps running. The explain plan cost is also lower in development.
Oracle Database 11 g Enterprise Edition Release 11.2.0.4.0 - 64 bit Production
CURSOR_SHARING is set to EXACT in production and FORCE in development - is this the reason?
select wonum from workorder where worktype in ('EM','CM') and siteid = 'DWS_DSS' and historyflag = 0 and (exists (select null from dcw_ddotpermits b where workorder.wonum = b.wonum and workorder.siteid = b.siteid and b.permittype in ('Construction Permit', 'Occupancy Permit') and b.permitenddate > sysdate group by b.wonum, b.permittype having count(wonum) > 1));
Explain plan from production (slow):
-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name               | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                    |     1 |    26 |   162K (17)| 00:32:30 |       |       |
|*  1 |  TABLE ACCESS BY GLOBAL INDEX ROWID| WORKORDER          |  1491 | 38766 |  4259   (1)| 00:00:52 |     1 |     1 |
|*  2 |   INDEX RANGE SCAN                 | WORKORDER_NDX20    |  2399 |       |   988   (1)| 00:00:12 |       |       |
|*  3 |  FILTER                            |                    |       |       |            |          |       |       |
|   4 |   HASH GROUP BY                    |                    |     1 |    35 |     6  (17)| 00:00:01 |       |       |
|*  5 |    TABLE ACCESS BY INDEX ROWID     | DCW_DDOTPERMITS    |     1 |    35 |     5   (0)| 00:00:01 |       |       |
|*  6 |     INDEX RANGE SCAN               | W_DDOTPERMITS_NDX2 |     3 |       |     2   (0)| 00:00:01 |       |       |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("WORKTYPE"='CM' OR "WORKTYPE"='EM')
   2 - access("SITEID"='DWS_DSS' AND "HISTORYFLAG"=0)
       filter( EXISTS (SELECT 0 FROM "MAXIMO"."DCW_DDOTPERMITS" "B" WHERE "B"."WONUM"=:B1 AND
               ("B"."PERMITTYPE"='Construction Permit' OR "B"."PERMITTYPE"='Occupancy Permit') AND
               "B"."PERMITENDDATE">SYSDATE@! AND "B"."SITEID"=:B2
               GROUP BY "B"."WONUM","B"."PERMITTYPE" HAVING COUNT(*)>1))
   3 - filter(COUNT(*)>1)
   5 - filter(("B"."PERMITTYPE"='Construction Permit' OR "B"."PERMITTYPE"='Occupancy Permit') AND
              "B"."PERMITENDDATE">SYSDATE@! AND "B"."SITEID"=:B1)
   6 - access("B"."WONUM"=:B1)
Explain plan from development (fast):
-----------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                    |     1 |    25 | 28247  (17)| 00:05:39 |
|*  1 |  FILTER                       |                    |       |       |            |          |
|*  2 |   VIEW                        | index$_join$_001   |  7991 |   195K|   985   (1)| 00:00:12 |
|*  3 |    HASH JOIN                  |                    |       |       |            |          |
|   4 |     INLIST ITERATOR           |                    |       |       |            |          |
|*  5 |      INDEX RANGE SCAN         | WWORKORDER_NDX32   |  7991 |   195K|   279   (2)| 00:00:04 |
|*  6 |     INDEX RANGE SCAN          | WORKORDER_NDX20    |  7991 |   195K|   973   (1)| 00:00:12 |
|*  7 |  FILTER                       |                    |       |       |            |          |
|   8 |   HASH GROUP BY               |                    |     1 |    39 |     6  (17)| 00:00:01 |
|*  9 |    TABLE ACCESS BY INDEX ROWID| DCW_DDOTPERMITS    |     1 |    39 |     5   (0)| 00:00:01 |
|* 10 |     INDEX RANGE SCAN          | W_DDOTPERMITS_NDX2 |     3 |       |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( EXISTS (SELECT 0 FROM "MAXIMO"."DCW_DDOTPERMITS" "B" WHERE "B"."WONUM"=:B1 AND
               ("B"."PERMITTYPE"=:SYS_B_4 OR "B"."PERMITTYPE"=:SYS_B_5) AND
               "B"."PERMITENDDATE">SYSDATE@! AND "B"."SITEID"=:B2
               GROUP BY "B"."WONUM","B"."PERMITTYPE" HAVING COUNT(*)>:SYS_B_6))
   2 - filter("HISTORYFLAG"=:SYS_B_3 AND "SITEID"=:SYS_B_2 AND ("WORKTYPE"=:SYS_B_0 OR "WORKTYPE"=:SYS_B_1))
   3 - access(ROWID=ROWID)
   5 - access("WORKTYPE"=:SYS_B_0 OR "WORKTYPE"=:SYS_B_1)
   6 - access("SITEID"=:SYS_B_2 AND "HISTORYFLAG"=:SYS_B_3)
   7 - filter(COUNT(*)>:SYS_B_6)
   9 - filter(("B"."PERMITTYPE"=:SYS_B_4 OR "B"."PERMITTYPE"=:SYS_B_5) AND
              "B"."PERMITENDDATE">SYSDATE@! AND "B"."SITEID"=:B1)
  10 - access("B"."WONUM"=:B1)
It looks more like a problem of data size and the associated statistics. (It may simply be outdated statistics on dev.)
In production, with more data, the NDX20 index on its own looks too expensive to the optimizer, so it combines it with the NDX32 index to avoid visiting the table.
The first step in investigating the problem would be to check whether the estimated row counts are realistic on each system, and then to determine whether the data clustering on the production system is much better than Oracle thinks it is - if so, then (for newer versions) setting the table preference TABLE_CACHED_BLOCKS to a realistic value may help ( https://jonathanlewis.wordpress.com/2015/11/02/clustering_factor-4/ ). If all else fails and dev is good, then hinting and capturing an SQL Plan Baseline may be required.
Regards,
Jonathan Lewis
-
Does passing a huge parameter to an Oracle procedure have a performance hit?
I have attached a script in which I am trying to process/parse XML from a table (STAGE_TBL) with an XMLTYPE column and insert the parsed data into another table (PROCESSED_DATA_TBL). The XML can be huge, up to 2MB, which translates into approximately 2000+ rows of parsed data. The issue I see is that when I pass the XML object to a procedure (STAGE_TBL_PROCESS) to parse it, it takes about 10 seconds per XML; but if, instead of passing the XML, I pass only the ID and read the XML directly from the table inside the procedure (STAGE_TBL_PROCESS), it takes about 0.15 seconds. According to the documentation, IN parameters are passed by reference, so why this performance difference?
Details of the database:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
Note: I could not run SQL_TRACE or DBMS_STATS as I don't have access to them.
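Without SQL_TRACE, a fallback for comparing the two variants is simple wall-clock timing from the client. A hypothetical sketch of such a micro-timer in Python (the names are mine, not from the original script; the same pattern works in any client language):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn once and return (elapsed_seconds, result)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return time.perf_counter() - start, result

# Usage: wrap each variant of the database call and compare elapsed times.
# A pure-Python computation stands in for the DB call here.
elapsed, rows = timed(lambda: sum(range(1000)))
print(f"{elapsed:.6f}s -> {rows}")
```

Timing each call in a loop and averaging gives a rough but serviceable comparison when instrumentation like SQL_TRACE is unavailable.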
/*
This one is taking .15 seconds to process an XML with about 2000 rp_sendRow elements
*/
DECLARE
CURSOR NewStage IS
SELECT *
FROM STAGE_TBL
WHERE status = 'N'
ORDER BY PUT_TIME ASC;
SUBTYPE rt_NewStage IS NewStage%ROWTYPE;
ROW_COUNT INTEGER := 0; -- Return value from calling the procedure
READ_COUNT INTEGER := 0; -- Number of rows read from the stage table
INSERT_COUNT_TOTAL INTEGER := 0; -- Number of Inserts Inven records
ERROR_COUNT INTEGER := 0; -- Number of Inven inserts that did not insert exactly 1 row in Inven
PROCESS_STATUS STATUS.MmsStatus;
STATUS_DESCRIPTION STATUS.MmsStatusReason;
ERRMSG VARCHAR2(500);
PROCEDURE STAGE_TBL_PROCESS (IDDATA IN RAW, PROCESS_STATUS OUT VARCHAR2, STATUS_DESCRIPTION OUT VARCHAR2, ROW_COUNT OUT NUMBER) AS
/*
This procedure is to parse the XML from STAGE_TBL and populate the data from XML to PROCESSED_DATA_TBL table
IN PARAMS
----------
IDDATA - ID from STAGE_TBL
xData - XMLType field from XML_DOCUMENT of STAGE_TBL
OUT PARAMS
-----------
PROCESS_STATUS - The STATUS of parsing and populating PROCESSED_DATA_TBL
STATUS_DESCRIPTION - The description of the STATUS of parsing and populating PROCESSED_DATA_TBL
ROW_COUNT - Number of rows inserted into PROCESSED_DATA_TBL
*/
BEGIN
INSERT ALL INTO PROCESSED_DATA_TBL
(PD_ID,
STORE,
SALES_NBR,
UNIT_COST,
ST_FLAG,
ST_DATE,
ST,
START_QTY,
START_VALUE,
START_ON_ORDER,
HAND,
ORDERED,
COMMITED,
SALES,
RECEIVE,
VALUED,
ID_1,
ID_2,
ID_3,
UNIT_PRICE,
EFFECTIVE_DATE,
STATUS,
STATUS_DATE,
STATUS_REASON)
VALUES (IDDATA
,store
,SalesNo
,UnitCost
,StWac
,StDt
,St
,StartQty
,StartValue
,StartOnOrder
,Hand
,Ordered
,COMMITED
,Sales
,Rec
,Valued
,Id1
,Id2
,Id3
,UnitPrice
,to_Date(EffectiveDate||' '||EffectiveTime, 'YYYY-MM-DD HH24:MI:SS')
,'N'
,SYSDATE
,'XML PROCESS INSERT')
WITH T AS
( SELECT STG.XML_DOCUMENT FROM STAGE_TBL STG WHERE STG.ID = IDDATA)
-- This is to parse and fetch the data from XML
SELECT E.* FROM T, XMLTABLE('rp_send/rp_sendRow' PASSING T.XML_DOCUMENT COLUMNS
store VARCHAR(20) PATH 'store'
,SalesNo VARCHAR(20) PATH 'sales'
,UnitCost NUMBER PATH 'cost'
,StWac VARCHAR(20) PATH 'flag'
,StDt DATE PATH 'st-dt'
,St NUMBER PATH 'st'
,StartQty NUMBER PATH 'qty'
,StartValue NUMBER PATH 'value'
,StartOnOrder NUMBER PATH 'start-on-order'
,Hand NUMBER PATH 'hand'
,Ordered NUMBER PATH 'order'
,Commited NUMBER PATH 'commit'
,Sales NUMBER PATH 'sales'
,Rec NUMBER PATH 'rec'
,Valued NUMBER PATH 'val'
,Id1 VARCHAR(30) PATH 'id-1'
,Id2 VARCHAR(30) PATH 'id-2'
,Id3 VARCHAR(30) PATH 'id-3'
,UnitPrice NUMBER PATH 'unit-pr'
,EffectiveDate VARCHAR(30) PATH 'eff-dt'
,EffectiveTime VARCHAR(30) PATH 'eff-tm'
) E;
ROW_COUNT := SQL%ROWCOUNT; -- Not the # of all the rows inserted.
PROCESS_STATUS := STATUS.PROCESSED;
IF ROW_COUNT < 1 THEN -- The insert failed Row Count = 0 No exception thrown
PROCESS_STATUS := STATUS.ERROR;
STATUS_DESCRIPTION := 'ERROR Did not insert into Pos Inventory. Reason Unknown';
END IF;
EXCEPTION
WHEN OTHERS THEN
ROW_COUNT := 0;
PROCESS_STATUS := STATUS.ERROR;
STATUS_DESCRIPTION := 'SqlCode:' || SQLCODE || ' SqlErrMsg:' || SQLERRM;
END;
BEGIN
DBMS_OUTPUT.enable(NULL);
FOR A_NewStage IN NewStage
LOOP
READ_COUNT := READ_COUNT + 1;
STAGE_TBL_PROCESS(A_NewStage.ID, PROCESS_STATUS, STATUS_DESCRIPTION, ROW_COUNT);
INSERT_COUNT_TOTAL := INSERT_COUNT_TOTAL + ROW_COUNT;
IF(ROW_COUNT <= 0 OR PROCESS_STATUS = STATUS.ERROR) THEN
ERROR_COUNT := ERROR_COUNT + 1;
UPDATE STAGE_TBL
SET status = PROCESS_STATUS,
status_DATE = SYSDATE,
status_DESCRIPTION = STATUS_DESCRIPTION
WHERE ID = A_NewStage.ID;
ELSE
UPDATE STAGE_TBL
SET status = PROCESS_STATUS,
status_DATE = SYSDATE,
status_DESCRIPTION = STATUS_DESCRIPTION,
SHRED_DT = SYSDATE
WHERE ID = A_NewStage.ID;
END IF;
COMMIT;
END LOOP;
COMMIT;
IF ERROR_COUNT > 0 THEN
ERRMSG := '** ERROR: ' || ERROR_COUNT || ' Stage records did not insert in to the Processed table correctly';
RAISE_APPLICATION_ERROR(-20001,ErrMsg);
END IF;
EXCEPTION
WHEN OTHERS THEN
RAISE;
END ;
/*
This one is taking 10 seconds to process an XML with about 2000 rp_sendRow elements
*/
DECLARE
CURSOR NewStage IS
SELECT *
FROM STAGE_TBL
WHERE status = 'N'
ORDER BY PUT_TIME ASC;
SUBTYPE rt_NewStage IS NewStage%ROWTYPE;
ROW_COUNT INTEGER := 0; -- Return value from calling the procedure
READ_COUNT INTEGER := 0; -- Number of rows read from the stage table
INSERT_COUNT_TOTAL INTEGER := 0; -- Number of Inserts Inven records
ERROR_COUNT INTEGER := 0; -- Number of Inven inserts that did not insert exactly 1 row in Inven
PROCESS_STATUS STATUS.MmsStatus;
STATUS_DESCRIPTION STATUS.MmsStatusReason;
ERRMSG VARCHAR2(500);
PROCEDURE STAGE_TBL_PROCESS (IDDATA IN RAW, xData IN STAGE_TBL.XML_DOCUMENT%TYPE, PROCESS_STATUS OUT VARCHAR2, STATUS_DESCRIPTION OUT VARCHAR2, ROW_COUNT OUT NUMBER) AS
/*
This procedure is to parse the XML from STAGE_TBL and populate the data from XML to PROCESSED_DATA_TBL table
IN PARAMS
----------
IDDATA - ID from STAGE_TBL
xData - XMLType field from XML_DOCUMENT of STAGE_TBL
OUT PARAMS
-----------
PROCESS_STATUS - The STATUS of parsing and populating PROCESSED_DATA_TBL
STATUS_DESCRIPTION - The description of the STATUS of parsing and populating PROCESSED_DATA_TBL
ROW_COUNT - Number of rows inserted into PROCESSED_DATA_TBL
*/
BEGIN
INSERT ALL INTO PROCESSED_DATA_TBL
(PD_ID,
STORE,
SALES_NBR,
UNIT_COST,
ST_FLAG,
ST_DATE,
ST,
START_QTY,
START_VALUE,
START_ON_ORDER,
HAND,
ORDERED,
COMMITED,
SALES,
RECEIVE,
VALUED,
ID_1,
ID_2,
ID_3,
UNIT_PRICE,
EFFECTIVE_DATE,
STATUS,
STATUS_DATE,
STATUS_REASON)
VALUES (IDDATA
,store
,SalesNo
,UnitCost
,StWac
,StDt
,St
,StartQty
,StartValue
,StartOnOrder
,Hand
,Ordered
,COMMITED
,Sales
,Rec
,Valued
,Id1
,Id2
,Id3
,UnitPrice
,to_Date(EffectiveDate||' '||EffectiveTime, 'YYYY-MM-DD HH24:MI:SS')
,'N'
,SYSDATE
,'XML PROCESS INSERT')
-- This is to parse and fetch the data from XML
SELECT E.* FROM XMLTABLE('rp_send/rp_sendRow' PASSING xDATA COLUMNS
store VARCHAR(20) PATH 'store'
,SalesNo VARCHAR(20) PATH 'sales'
,UnitCost NUMBER PATH 'cost'
,StWac VARCHAR(20) PATH 'flag'
,StDt DATE PATH 'st-dt'
,St NUMBER PATH 'st'
,StartQty NUMBER PATH 'qty'
,StartValue NUMBER PATH 'value'
,StartOnOrder NUMBER PATH 'start-on-order'
,Hand NUMBER PATH 'hand'
,Ordered NUMBER PATH 'order'
,Commited NUMBER PATH 'commit'
,Sales NUMBER PATH 'sales'
,Rec NUMBER PATH 'rec'
,Valued NUMBER PATH 'val'
,Id1 VARCHAR(30) PATH 'id-1'
,Id2 VARCHAR(30) PATH 'id-2'
,Id3 VARCHAR(30) PATH 'id-3'
,UnitPrice NUMBER PATH 'unit-pr'
,EffectiveDate VARCHAR(30) PATH 'eff-dt'
,EffectiveTime VARCHAR(30) PATH 'eff-tm'
) E;
ROW_COUNT := SQL%ROWCOUNT; -- Not the # of all the rows inserted.
PROCESS_STATUS := STATUS.PROCESSED;
IF ROW_COUNT < 1 THEN -- The insert failed Row Count = 0 No exception thrown
PROCESS_STATUS := STATUS.ERROR;
STATUS_DESCRIPTION := 'ERROR Did not insert into Pos Inventory. Reason Unknown';
END IF;
EXCEPTION
WHEN OTHERS THEN
ROW_COUNT := 0;
PROCESS_STATUS := STATUS.ERROR;
STATUS_DESCRIPTION := 'SqlCode:' || SQLCODE || ' SqlErrMsg:' || SQLERRM;
END;
BEGIN
DBMS_OUTPUT.enable(NULL);
FOR A_NewStage IN NewStage
LOOP
READ_COUNT := READ_COUNT + 1;
STAGE_TBL_PROCESS(A_NewStage.ID, A_NewStage.XML_DOCUMENT, PROCESS_STATUS, STATUS_DESCRIPTION, ROW_COUNT);
INSERT_COUNT_TOTAL := INSERT_COUNT_TOTAL + ROW_COUNT;
IF(ROW_COUNT <= 0 OR PROCESS_STATUS = STATUS.ERROR) THEN
ERROR_COUNT := ERROR_COUNT + 1;
UPDATE STAGE_TBL
SET status = PROCESS_STATUS,
status_DATE = SYSDATE,
status_DESCRIPTION = STATUS_DESCRIPTION
WHERE ID = A_NewStage.ID;
ELSE
UPDATE STAGE_TBL
SET status = PROCESS_STATUS,
status_DATE = SYSDATE,
status_DESCRIPTION = STATUS_DESCRIPTION,
SHRED_DT = SYSDATE
WHERE ID = A_NewStage.ID;
END IF;
COMMIT;
END LOOP;
COMMIT;
IF ERROR_COUNT > 0 THEN
ERRMSG := '** ERROR: ' || ERROR_COUNT || ' Stage records did not insert in to the Processed table correctly';
RAISE_APPLICATION_ERROR(-20001,ErrMsg);
END IF;
EXCEPTION
WHEN OTHERS THEN
RAISE;
END ;
My XML with just one rp_sendRow element; it can go up to 2000 rp_sendRow elements:
<?xml version="1.0" encoding="UTF-8"?>
<rp_send xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<rp_sendRow>
<store>0123</store>
<sales>022399190</sales>
<cost>0.01</cost>
<flag>true</flag>
<st-dt>2013-04-19</st-dt>
<st>146.51</st>
<qty>13.0</qty>
<value>0.0</value>
<start-on-order>0.0</start-on-order>
<hand>0.0</hand>
<order>0.0</order>
<commit>0.0</commit>
<sales>0.0</sales>
<rec>0.0</rec>
<val>0.0</val>
<id-1/>
<id-2/>
<id-3/>
<unit-pr>13.0</unit-pr>
<eff-dt>2015-06-16</eff-dt>
<eff-tm>09:12:21</eff-tm>
</rp_sendRow>
</rp_send>
"The issue I see is that when I pass the XML object to a procedure (STAGE_TBL_PROCESS) to parse it, it takes about 10 seconds per XML; but if I instead pass only the ID and read the XML directly from the table inside the procedure (STAGE_TBL_PROCESS), it takes about 0.15 seconds."
In version 11.1, Oracle introduced a new storage model for the XMLType data type, called binary XML.
Binary XML became the default in 11.2.0.2, deprecating the old CLOB-based storage.
Binary XML is a post-parse, optimized format for both storage and XQuery processing.
When an XQuery expression is evaluated (via XMLTABLE, for example) over an XMLType column stored as binary XML, Oracle can use a streaming XPath evaluation that can outperform the same query over a transient XMLType by several orders of magnitude.
You can see that in the explain plan:
SQL> SELECT E.*
  2  FROM stage_tbl t
  3     , XMLTABLE('rp_send/rp_sendRow' PASSING t.xml_document
  4         COLUMNS store    VARCHAR(20) PATH 'store'
  5               , SalesNo  VARCHAR(20) PATH 'sales'
  6               , UnitCost NUMBER      PATH 'cost'
  7       ) E ;

Execution Plan
----------------------------------------------------------
Plan hash value: 1134903869

--------------------------------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |     1 |  2008 |    32   (0)| 00:00:01 |
|   1 |  NESTED LOOPS      |           |     1 |  2008 |    32   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| STAGE_TBL |     1 |  2002 |     3   (0)| 00:00:01 |
|   3 |   XPATH EVALUATION |           |       |       |            |          |
--------------------------------------------------------------------------------
When the query is executed over a transient XMLType (for example a parameter, or a PL/SQL variable), Oracle cannot use the binary model and falls back to a functional evaluation over an in-memory, DOM-like representation of the XML.
You can spot that in the explain plan as a 'COLLECTION ITERATOR PICKLER FETCH' operation.
That is what explains the difference (in your version) between processing from an XMLType column (stored in binary XML format) and from a variable or parameter.
From 11.2.0.4 onwards, things have changed a bit, with Oracle introducing a new level of optimization for transient XMLTypes.
The explain plan will show an 'XMLTABLE EVALUATION' operation in that case.
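For comparison, the rp_sendRow structure from the question can be flattened with any XPath-style API outside the database too. A minimal Python sketch (purely illustrative, not part of the original code; the sample is trimmed to a few of the question's elements):

```python
import xml.etree.ElementTree as ET

SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<rp_send xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <rp_sendRow>
    <store>0123</store>
    <sales>022399190</sales>
    <cost>0.01</cost>
    <eff-dt>2015-06-16</eff-dt>
    <eff-tm>09:12:21</eff-tm>
  </rp_sendRow>
</rp_send>"""

def parse_rows(xml_text):
    """Yield one dict per rp_sendRow, mirroring what XMLTABLE projects."""
    root = ET.fromstring(xml_text)
    for row in root.findall("rp_sendRow"):
        yield {child.tag: child.text for child in row}

rows = list(parse_rows(SAMPLE))
print(rows[0]["store"])    # 0123
```

This is the same "one row per rp_sendRow, one column per child element" projection that the XMLTABLE clause in the PL/SQL performs.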
-
LEFT JOIN increases the number of rows
Hi guys,
I have a problem: my LEFT JOIN retrieves extra rows. I know there are only 252 rows that match the WHERE condition. If I use the table in a join with the same WHERE condition, my row count increases.
-- 1176 rows
SELECT COUNT(erg_ID)
FROM MySchema.T_STA_ERG sta_erg
INNER JOIN T_MEN hoechst
  ON sta_erg.PARAMETER = hoechst.PARAMETER
  AND sta_erg.JAHR = 2014
WHERE sta_erg.MESSERG_KNG = 'A' AND sta_erg.MESSERG_ALPHA IN ('03') AND sta_erg.NORM_MESS IS NULL
-- 252 rows
SELECT DISTINCT erg_ID FROM myschema.T_STA_ERG sta_erg WHERE sta_erg.MESSERG_KNG = 'A' AND sta_erg.MESSERG_ALPHA IN ('03') AND sta_erg.NORM_MESS IS NULL
Any clues how I can build conditions into my join that would not increase the row count?
Why not just an inner join then?
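The row-count inflation here is just join arithmetic: each left-side row is repeated once per matching right-side row, so duplicate join keys on the right multiply the result. A small Python sketch of the effect (the tables are made up for illustration):

```python
# Each left row matches every right row with the same key, so duplicate
# keys on the right multiply the result: 252 keyed rows can become 1176.
left  = [{"erg_id": 1, "param": "A"}, {"erg_id": 2, "param": "B"}]
right = [{"param": "A"}, {"param": "A"}, {"param": "A"}, {"param": "B"}]

joined = [(l["erg_id"], r["param"]) for l in left for r in right
          if l["param"] == r["param"]]

print(len(left), "left rows ->", len(joined), "joined rows")  # 2 -> 4
```

If the join is only there to filter (not to project columns from T_MEN), an EXISTS subquery avoids the multiplication entirely.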
-
CREATE TABLE LUKE
(Department_nn VARCHAR(12),
 Emp_idd NUMBER(12) NOT NULL
);
INSERT INTO Luke VALUES ('Accounting', 11);
INSERT INTO Luke VALUES ('Sales', 00);
INSERT INTO Luke VALUES ('IT', 22);
DECLARE
  CURSOR cur_luc IS
    SELECT
      Department_nn,
      Emp_idd
    FROM Luke;
  My_cur_luc cur_luc%ROWTYPE;
BEGIN
  OPEN cur_luc;
  LOOP
    FETCH cur_luc
    INTO My_cur_luc;
    DBMS_OUTPUT.put_line('Emp ID: ' || My_cur_luc.emp_idd);
    dbms_output.put_line('Row count: ' || My_cur_luc%ROWCOUNT);
  END LOOP;
  /* IF cur_luc%ISOPEN THEN
       CLOSE cur_luc;
     END IF; */
END;
Thank you very much, I just figured out the problem: I had not referenced my cursor in dbms_output.put_line(cur_luc%ROWCOUNT). I referenced the record name My_cur_luc instead.
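The fix can be seen with a toy cursor: the running row count is an attribute of the cursor, never of the fetched record. A Python sketch of the same idea (illustrative only, not PL/SQL):

```python
class ToyCursor:
    """Minimal cursor: fetch() returns the next row and tracks %ROWCOUNT."""
    def __init__(self, rows):
        self._rows = iter(rows)
        self.rowcount = 0          # analogue of cur_luc%ROWCOUNT

    def fetch(self):
        row = next(self._rows, None)
        if row is not None:
            self.rowcount += 1
        return row

cur = ToyCursor([("Accounting", 11), ("Sales", 0), ("IT", 22)])
while (rec := cur.fetch()) is not None:
    # The count comes from the cursor (cur.rowcount), never from rec.
    print("Emp ID:", rec[1], "Row count:", cur.rowcount)
```

Just as here, PL/SQL's %ROWCOUNT must be read off the cursor name (cur_luc%ROWCOUNT), not off the record variable fetched from it.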