Commit a single table
How can we commit only one table in a session without committing the other tables? COMMIT is not table-specific; it applies to the whole transaction, not to individual operations. If you perform DML on multiple tables in a single transaction and commit at the end, the changes to all tables are committed. You can partially achieve your objective by first running the DML on the table you want to validate, validating it, and only then running the DML for the other tables.
Or by using PRAGMA AUTONOMOUS_TRANSACTION so that the work on the table you want to commit is committed independently of the rest of the transaction.
You can post the actual code to give a better idea of your problem.
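As an illustration of the autonomous-transaction route, here is a minimal sketch (the procedure and the audit_log table are made up for the example, not from the question):

```sql
-- Hypothetical audit procedure: its COMMIT affects only its own
-- insert, never the caller's pending transaction.
CREATE OR REPLACE PROCEDURE log_audit (p_msg IN VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO audit_log (msg, logged_at)
  VALUES (p_msg, SYSDATE);
  COMMIT;  -- commits this procedure's work only
END;
/
```

Note that work committed this way stays committed even if the main transaction later rolls back, so it only suits genuinely independent writes.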
Tags: Database
Similar Questions
-
I created a single-table hash cluster like this:
create tablespace mssm datafile 'c:\app\mssm01.dbf' size 100m
segment space management manual;
create cluster hash_cluster_4k
(id number(2))
size 8192 single table hash is id hashkeys 4 tablespace mssm;
-- Also created a clustered table with the row size such that a single record corresponds to one block, and inserted 5 records, each with a distinct key value
CREATE TABLE hash_cluster_tab_8k
( id   number(2),
  txt1 char(2000),
  txt2 char(2000),
  txt3 char(2000)
)
CLUSTER hash_cluster_8k (id);
begin
  for i in 1..5 loop
    insert into hash_cluster_tab_8k values (i, 'x', 'x', 'x');
  end loop;
end;
/
exec dbms_stats.gather_table_stats(user, 'HASH_CLUSTER_TAB_8K', cascade => true);
Now, if I try to access the row with id = 1, it shows 2 I/Os (cr = 2) instead of the single I/O expected from a hash cluster.
Rows Row Source operation
------- ---------------------------------------------------
      1  TABLE ACCESS HASH HASH_CLUSTER_TAB_8K (cr=2 pr=0 pw=0 time=0 us)
If I run the query after creating a unique index on hash_cluster_tab (id), the execution plan shows hash access and a single I/O (cr = 1).
Does this mean that to get a single I/O from a single-table hash cluster we have to create a unique index? Won't that add the extra burden of maintaining an index?
What is the second I/O needed for in the case where a unique index is absent?
I would be very grateful if gurus could explain this behavior.
Thanks in advance...

user12288492 wrote:
I ran the query with all 5 id values and the results were even more confusing. On the first run, I got cr = 2 for two key values, and cr = 1 for the rest.
On the second run, I got cr = 2 for one key value, and cr = 1 for the rest.
On the third run, I got cr = 1 for all key values. The results vary depending on the number of previous runs and the number of times you reconnect.
The extra CR is a block cleanout effect. If you trace buffer access (events 10200-10203), you can see the details. Simplistically, if you create your data, then reconnect and query one of the rows (but not id = 5, because that one will be cleaned out by the statistics collection), you should see the following statistics:

cleanouts only - consistent read gets        1
immediate (CR) block cleanout applications   1
commit txn count during cleanout             1
cleanout - number of ktugct calls            1
commit SCN cached                            1
redo entries                                 1
redo size                                    80

On the first block visited, Oracle made a buffer visit to resolve a pending commit SCN (ktugct - get commit time). This cleans out the block and caches the commit SCN it acquired. The remaining blocks you visit in the same session do not need to be cleaned out, because the session can use the cached commit SCN to avoid the cleanout operation. Eventually all the blocks will have been cleaned out (meaning they will be written to disk in that state) and the extra CR stops happening.
There is a little quirk - the cleanout seems to apply to the block-format calls - and I do not understand why this did not happen for each row inserted thereafter.
Regards
Jonathan Lewis -
Dreamweaver: several graphics in a single table cell
Dreamweaver is giving me problems when I try to combine multiple graphics in a single table cell. Some are separated by just two line-break entries, while others do not accept these separators. Please show me the right way to handle this.
Jack,
- Tables should not be used for web page layouts.
- Use tables only for tabular data such as spreadsheets and charts.
- Today, we use an external CSS file for layout, typography and other styles.
Please show us what you are trying to do by copying and pasting the code in a reply on the web forum.
Nancy O.
-
How to restore a single table from a Data Pump export into a different schema?
Environment:
Oracle 11.2.0.3 EE on Solaris
I was looking through the Data Pump import documentation trying to find the correct syntax to import a single table from a Data Pump export into a different schema.
So, I want to load table USER1.TABLE1 from a Data Pump export into USER2.TABLE1.
Looking at the REMAP_TABLE options:
I can't see where to specify the name of the target schema. The examples all had the new table residing in the same schema, just with a new name: REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename OR REMAP_TABLE=[schema.]old_tablename[:partition]:new_tablename
I looked at REMAP_SCHEMA, but the docs say it imports the entire schema into the new schema, and I want only one table.
All suggestions are welcome!
-gary
I thought I had tried this combination, and it seemed to me that REMAP_SCHEMA somehow overrode the TABLES= parameter and started to load all the objects.
If it does not fail (and it shouldn't) then please post details here and I'll see what happens.
Let me get back into the sandbox and try again. I admit I was in a bit of a rush when I did it the first time.
We are all in a hurry, no worries. If it fails, please post the details and the log file.
Does it make any sense that one parameter would override another?
No, this should never happen. We have tons of checks to ensure that a job cannot have several meanings. For example, you cannot say
FULL=y SCHEMAS=foo - which do you want, a full export or an export of a list of schemas, etc.
Your suggestion was the first thing that I thought would work.
It should work. If this isn't the case, send the log with the command and the results file.
Dean
Thanks again for the help and stay tuned for my new attempt.-gary
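For reference, the combination discussed above can be expressed as a Data Pump parameter file; a sketch, with the directory, dump file, and log names as placeholders:

```text
# impdp USER2/password parfile=imp_one_table.par
DIRECTORY=dp_dir
DUMPFILE=export.dmp
LOGFILE=imp_one_table.log
TABLES=USER1.TABLE1
REMAP_SCHEMA=USER1:USER2
```

With TABLES= restricting the job to the one table, REMAP_SCHEMA should remap only that table rather than pull in the whole source schema.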
-
multiple to single table replication
Hello
Is there a way we can replicate multiple tables at the source into a single table at the target?
example: join table A and B on the source and put it in the table C.
Thank you!

Yes, it is possible. OGG can be used to get the data from the source tables and load it into a single target table. But keep in mind that if you are merging data, PK conflicts should be avoided; otherwise you can get data problems.
-Koko
-
Repeating nested groups in a single table - RTF template
Hi all
I have a little problem with RTF templates. I am trying to use 2 repeating groups within a single table, but either I get no data for the fields of the outer loop, or I get only one record from the inner loop.
It is a cash requirements report template. My outer repeating group is G_VENDOR and the inner one is G_INVOICE. I'm at the stage where I have nested the table with the G_INVOICE details inside another table (with the VENDOR NAME in the first field). This has a drawback, however: it does not repeat the vendor name if there is more than one G_INVOICE in the G_VENDOR. I don't want a separate table for each vendor; I want one with all the data.
I had an Oracle SR open, but they don't seem to be very helpful, which makes me think it is a bug that will never be fixed. I know that flattening the XML would be an option, but I don't want to have to redevelop every report template I need on my own.
Someone has an idea?
Regards
Piotr

Hi Piotr,
Ideally you would flatten the XML, but if you are inside the invoice loop you can still access the fields of the outer loop by editing the form field and prefixing the element with ../ .
For example, <?vendor_name?> becomes <?../vendor_name?>
Kind regards
Robert
-
How to combine a large number of key/value pair tables into a single table?
I have 250+ key/value pair tables with the following characteristics:
(1) keys are unique within a table, but may or may not be unique across the set of tables
(2) each table has about 2 million rows
What is the best way to create a single table holding all the unique keys and their values from all these tables? The following two queries work up to about 150 tables:
with t1 as ( select 1 as key, 'a1' as val from dual union all
             select 2 as key, 'a1' as val from dual union all
             select 3 as key, 'a2' as val from dual )
   , t2 as ( select 2 as key, 'b1' as val from dual union all
             select 3 as key, 'b2' as val from dual union all
             select 4 as key, 'b3' as val from dual )
   , t3 as ( select 1 as key, 'c1' as val from dual union all
             select 3 as key, 'c1' as val from dual union all
             select 5 as key, 'c2' as val from dual )
select coalesce(t1.key, t2.key, t3.key) as key
     , max(t1.val) as val1
     , max(t2.val) as val2
     , max(t3.val) as val3
from t1
full join t2 on ( t1.key = t2.key )
full join t3 on ( t2.key = t3.key )
group by coalesce(t1.key, t2.key, t3.key)
/
with master as ( select rownum as key from dual connect by level <= 5 )
   , t1 as ( select 1 as key, 'a1' as val from dual union all
             select 2 as key, 'a1' as val from dual union all
             select 3 as key, 'a2' as val from dual )
   , t2 as ( select 2 as key, 'b1' as val from dual union all
             select 3 as key, 'b2' as val from dual union all
             select 4 as key, 'b3' as val from dual )
   , t3 as ( select 1 as key, 'c1' as val from dual union all
             select 3 as key, 'c1' as val from dual union all
             select 5 as key, 'c2' as val from dual )
select m.key as key
     , t1.val as val1
     , t2.val as val2
     , t3.val as val3
from master m
left join t1 on ( t1.key = m.key )
left join t2 on ( t2.key = m.key )
left join t3 on ( t3.key = m.key )
/
A couple of questions, then a possible solution.
Why on earth do you have 250+ key/value pair tables?
Why on earth do you want to group them into a table containing one row per key?
You could pivot all of the tables at once, rather than a subset. Something like:
with t1 as ( select 1 as key, 'a1' as val from dual union all
             select 2 as key, 'a1' as val from dual union all
             select 3 as key, 'a2' as val from dual )
   , t2 as ( select 2 as key, 'b1' as val from dual union all
             select 3 as key, 'b2' as val from dual union all
             select 4 as key, 'b3' as val from dual )
   , t3 as ( select 1 as key, 'c1' as val from dual union all
             select 3 as key, 'c1' as val from dual union all
             select 5 as key, 'c2' as val from dual )
select key, max(t1val), max(t2val), max(t3val)
from ( select key, val t1val, null t2val, null t3val from t1
       union all
       select key, null, val, null from t2
       union all
       select key, null, null, val from t3 )
group by key
If you can do it in a single query, a UNION ALL of all 250+ tables, you don't need to worry about row chaining or migration. It may be necessary to do this in a few passes, depending on the resources available on your server. If so, I would be inclined to create the table first, with a larger than normal PCTFREE, doing the first pass as a direct insert and the other pass or passes as a merge.
Another solution might be to use the approach above, but limit the range of keys in each pass. So pass 1 would have a predicate like key between 1 and 10 in every branch of the union, pass 2 would have key between 11 and 20, etc. That way, everything would be direct inserts.
That said, going back to my second question above: why on earth do you want or need to do that? What is the business problem you want to solve? There could be a much better way to meet the requirement.
John
-
Hi all
I am using the MERGE command on a single table. I want to check some values in the table: if they already exist I just update; otherwise I want to insert.
For this I use the following code:
MERGE INTO my_table OLD_VAL
USING (SELECT L_field1, L_field2, L_field3, L_field4 FROM DUAL) NEW_VAL
ON (    OLD_VAL.field1 = NEW_VAL.L_field1
    AND OLD_VAL.field2 = NEW_VAL.L_field2
    AND OLD_VAL.field3 = NEW_VAL.L_field3
   )
WHEN MATCHED THEN
  UPDATE SET OLD_VAL.field4 = NEW_VAL.L_field4
WHEN NOT MATCHED THEN
  INSERT (field1, field2, field3, field4, field5)
  VALUES (NEW_VAL.L_field1, NEW_VAL.L_field2, NEW_VAL.L_field3, NEW_VAL.L_field4, SYSDATE);
Fields starting with L_ here is my local variables inside my procedure.
It is giving the error ORA-00904: "NEW_VAL"."L_field3": invalid identifier
Thank you all.

SELECT L_field1, L_field2, L_field3, L_field4 FROM DUAL
1. you are selecting all the values here?
2. try giving an alias to every column -
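Following the advice above, a sketch of the MERGE with explicit column aliases in the USING subquery (field names taken from the question; not tested against a real schema):

```sql
-- Aliasing the subquery columns gives the outer MERGE stable
-- identifiers to reference, avoiding ORA-00904.
MERGE INTO my_table old_val
USING (SELECT l_field1 AS f1, l_field2 AS f2,
              l_field3 AS f3, l_field4 AS f4
       FROM dual) new_val
ON (    old_val.field1 = new_val.f1
    AND old_val.field2 = new_val.f2
    AND old_val.field3 = new_val.f3)
WHEN MATCHED THEN
  UPDATE SET old_val.field4 = new_val.f4
WHEN NOT MATCHED THEN
  INSERT (field1, field2, field3, field4, field5)
  VALUES (new_val.f1, new_val.f2, new_val.f3, new_val.f4, SYSDATE);
```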
Import fails for a single table...
Hello
On the production server, I created a db user, let's call it PRD_USER, with default tablespace PRD_TBL.
On the development server, I created a db user, also called PRD_USER, with default tablespace DEV_TBL.
On the production server, I use the imp utility to import like this:
imp system/manager fromuser=PRD_USER touser=PRD_USER ignore=y file='...' log='...'
The import succeeds for about 25 tables with their indexes and constraints, but it fails for a single table with the error {I don't remember the ORA- number and I don't have access at the moment}: tablespace DEV_TBL does not exist.
Of course this tablespace does not exist in the production environment. But how does this problem arise, given that the default tablespace for the user is not DEV_TBL but PRD_TBL...?
Do you have any idea what the cause could be and how I can overcome this problem when importing...? {Note: as a temporary workaround I took the table-creation SQL script and left out the reference to the "DEV_TBL" tablespace.}
The two servers work in exactly the same version of DB...
Note: I use DB 10g v.2
Thank you
SIM

If the table has partitions, import tries to create the partitions (in the CREATE TABLE statement) on the original tablespace.
Or, if there is a LOB segment in the table, import tries to create it on the original tablespace.
Hemant K Collette
-
sorting comma-separated numbers into an array
The attached VI should accept two numbers separated by a comma, then use the first number to set a Boolean value in an array. The second number must then set the corresponding location of a numeric array to that value.
For example, for an array of 5 elements:
The user input is 3, 2. The VI will produce the following two arrays:
F 0
F 0
T 2
F 0
F 0
-
Comma-separated values in a table column: how to get each field separately?
Hello
I have a table in which one column holds comma-separated values. This table is populated using SQL*Loader; it is a staging table.
I need to retrieve the individual records from these comma-separated values.
The format of the .CSV file is as follows -
A pipe-separated file:
dhcp-1-1-1|wnlb-cmts-01-1,wnlb-cmts-02-2|
dhcp-1-1-2|wnlb-cmts-03-3,wnlb-cmts-04-4,wnlb-cmts-05-5|
dhcp-1-1-3|wnlb-cmts-01-1|
dhcp-1-1-4|wnlb-cmts-05-8,wnlb-cmts-05-6,wnlb-cmts-05-0,wnlb-cmts-03-3|
dhcp-1-1-5|wnlb-cmts-02-2,wnlb-cmts-04-4,wnlb-cmts-05-7|
CREATE TABLE link_data (dhcp_token VARCHAR2 (30), cmts_to_add VARCHAR2 (200), cmts_to_remove VARCHAR2 (200));
insert into link_data values ('dhcp-1-1-1','wnlb-cmts-01-1,wnlb-cmts-02-2',null);
insert into link_data values ('dhcp-1-1-2','wnlb-cmts-03-3,wnlb-cmts-04-4,wnlb-cmts-05-5',null);
insert into link_data values ('dhcp-1-1-3','wnlb-cmts-01-1',null);
insert into link_data values ('dhcp-1-1-4','wnlb-cmts-05-8,wnlb-cmts-05-6,wnlb-cmts-05-0,wnlb-cmts-03-3',null);
insert into link_data values ('dhcp-1-1-5','wnlb-cmts-02-2,wnlb-cmts-04-4,wnlb-cmts-05-7',null);
Here the cmts_to_add column holds comma-separated values.
I need values such as -
> for wnlb-cmts-01-1,wnlb-cmts-02-2
>> wnlb-cmts-01-1
>> wnlb-cmts-02-2
> for wnlb-cmts-03-3,wnlb-cmts-04-4,wnlb-cmts-05-5
>> wnlb-cmts-03-3
>> wnlb-cmts-04-4
>> wnlb-cmts-05-5
And so on...
I do this because it's a staging table and I load the data into the main tables from this table.
The second field contains several values as a simple comma-delimited string.
I need to write a PL/SQL block that inserts into the main table after a check: if the combination dhcp-1-1-1 with wnlb-cmts-01-1 is already present in the main table, do not insert; otherwise insert a new record.
To meet this requirement, I need to get each distinct value of the cmts_to_add column to insert into the DB.
The values to insert would be dhcp-1-1-1_TO_wnlb-cmts-01-1 and dhcp-1-1-1_TO_wnlb-cmts-02-2 for the first row of the link_data table.
The process is the same for the rest of the rows.
I tried the SUBSTR and INSTR functions for this problem, but it does not work.
declare
  cursor c_link is select * from link_data;
  l_rec_link link_data%rowtype;
  l_dhcp varchar2(30);
  l_cmts varchar2(20000);
  l_cmts_1 varchar2(32000);
begin
  open c_link;
  loop
    fetch c_link into l_rec_link;
    l_cmts := l_rec_link.cmts_to_add;
    loop
      l_cmts_1 := substr(l_cmts, 1, instr(l_cmts, ',') - 1);
      dbms_output.put_line(l_cmts_1);
    end loop;
    dbms_output.put_line(l_dhcp || '|' || l_cmts);
    exit when c_link%notfound;
  end loop;
exception
  when others then
    dbms_output.put_line('ERROR ' || SQLERRM);
end;
It's pseudo-code I wrote, but it gives me the wrong result: the error ORA-20000: ORU-10027: buffer overflow, limit of 20000 bytes.
I am using-
Oracle Database 11 g Enterprise Edition Release 11.2.0.1.0 - 64 bit Production
Please tell me if my problem isn't clear!
Hello
A little 'trick': add a comma at the end of the string... It then becomes easier to deal with the fact that there are zero, one, or N components...
CREATE TABLE link_data (dhcp_token VARCHAR2 (30), cmts_to_add VARCHAR2 (200), cmts_to_remove VARCHAR2 (200));
insert into link_data values ('dhcp-1-1-1','wnlb-cmts-01-1,wnlb-cmts-02-2',null);
insert into link_data values ('dhcp-1-1-2','wnlb-cmts-03-3,wnlb-cmts-04-4,wnlb-cmts-05-5',null);
insert into link_data values ('dhcp-1-1-3','wnlb-cmts-01-1',null);
insert into link_data values ('dhcp-1-1-4','wnlb-cmts-05-8,wnlb-cmts-05-6,wnlb-cmts-05-0,wnlb-cmts-03-3',null);
insert into link_data values ('dhcp-1-1-5','wnlb-cmts-02-2,wnlb-cmts-04-4,wnlb-cmts-05-7',null);
COMMIT;

SET SERVEROUT ON
DECLARE
l_cmts VARCHAR2 (200 CHAR);
l_cmts_1 VARCHAR2 (200 CHAR);
BEGIN
FOR r IN (SELECT dhcp_token, cmts_to_add || ',' AS cmts
          FROM link_data
)
LOOP
l_cmts := r.cmts;
l_cmts_1 := SUBSTR (l_cmts, 1, INSTR (l_cmts, ',') - 1);
WHILE l_cmts_1 IS NOT NULL
LOOP
DBMS_OUTPUT.put_line (r.dhcp_token || '|' || l_cmts_1);
l_cmts := SUBSTR (l_cmts, INSTR (l_cmts, ',') + 1);
l_cmts_1 := SUBSTR (l_cmts, 1, INSTR (l_cmts, ',') - 1);
END LOOP;
END LOOP;
END;
/
dhcp-1-1-1|wnlb-cmts-01-1
dhcp-1-1-1|wnlb-cmts-02-2
dhcp-1-1-2|wnlb-cmts-03-3
dhcp-1-1-2|wnlb-cmts-04-4
dhcp-1-1-2|wnlb-cmts-05-5
dhcp-1-1-3|wnlb-cmts-01-1
dhcp-1-1-4|wnlb-cmts-05-8
dhcp-1-1-4|wnlb-cmts-05-6
dhcp-1-1-4|wnlb-cmts-05-0
dhcp-1-1-4|wnlb-cmts-03-3
dhcp-1-1-5|wnlb-cmts-02-2
dhcp-1-1-5|wnlb-cmts-04-4
dhcp-1-1-5|wnlb-cmts-05-7

Best regards,
Bruno Vroman.
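As a side note, on recent versions the same split can also be written in plain SQL with REGEXP_SUBSTR and CONNECT BY. A sketch against the link_data table from the thread (REGEXP_COUNT requires 11g, which matches the poster's version; not from the thread itself):

```sql
-- One output row per comma-separated element of cmts_to_add.
-- PRIOR SYS_GUID() prevents connect-by cycles across source rows.
SELECT dhcp_token,
       REGEXP_SUBSTR(cmts_to_add, '[^,]+', 1, LEVEL) AS cmts
FROM   link_data
CONNECT BY LEVEL <= REGEXP_COUNT(cmts_to_add, ',') + 1
       AND PRIOR dhcp_token = dhcp_token
       AND PRIOR SYS_GUID() IS NOT NULL;
```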
-
Multiple values in a single Table cell
Hello
I have a customer requirement: I need to show multiple values within a single table cell.
Example:

Location  City    Shop
North     City A  HS-200, SH-210, SH310
South     City B  SH-100, SH341
East      City C  SH-20

But my table shows repeating cells as follows:

Location  City    Shop
North     City A  SH-200
North     City A  SH-210
North     City A  SH310
South     City B  SH-100
South     City B  SH341
East      City C  SH-20

So I need your help to show the repeated STORE names in a single column of the table.
Thank you
Try this
EVALUATE_AGGR('LISTAGG(%1, %2) WITHIN GROUP (ORDER BY %3 DESC)', TableName.ColumnName, ',', TableName.ColumnName)
-
Commit performance on a table with a fast refresh MV
Hello world
I'm trying to wrap my head around fast refresh performance, and why I see (what I consider) high disk/query numbers associated with the update of the MV log in a TKPROF.
The setup:
(Oracle 10.2.0.4.0)
Database table:
SQL> desc action;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PK_ACTION_ID                              NOT NULL NUMBER(10)
 CATEGORY                                           VARCHAR2(20)
 INT_DESCRIPTION                                    VARCHAR2(4000)
 EXT_DESCRIPTION                                    VARCHAR2(4000)
 ACTION_TITLE                              NOT NULL VARCHAR2(400)
 CALL_DURATION                                      VARCHAR2(6)
 DATE_OPENED                               NOT NULL DATE
 CONTRACT                                           VARCHAR2(100)
 SOFTWARE_SUMMARY                                   VARCHAR2(2000)
 MACHINE_NAME                                       VARCHAR2(25)
 BILLING_STATUS                                     VARCHAR2(15)
 ACTION_NUMBER                                      NUMBER(3)
 THIRD_PARTY_NAME                                   VARCHAR2(25)
 MAILED_TO                                          VARCHAR2(400)
 FK_CONTACT_ID                                      NUMBER(10)
 FK_EMPLOYEE_ID                            NOT NULL NUMBER(10)
 FK_ISSUE_ID                               NOT NULL NUMBER(10)
 STATUS                                             VARCHAR2(80)
 PRIORITY                                           NUMBER(1)
 EMAILED_CUSTOMER                                   TIMESTAMP(6) WITH LOCAL TIME ZONE

SQL> select count(*) from action;

  COUNT(*)
----------
   1388780

The MV was created.
Fast refresh works fine, and the log is kept small enough.

create materialized view log on action
  with sequence, rowid (pk_action_id, fk_issue_id, date_opened)
  including new values;

-- Create materialized view
create materialized view issue_open_mv
  build immediate
  refresh fast on commit
  enable query rewrite
as
select fk_issue_id issue_id,
       count(*) cnt,
       min(date_opened) issue_open,
       max(date_opened) last_action_date,
       min(pk_action_id) first_action_id,
       max(pk_action_id) last_action_id,
       count(pk_action_id) num_actions
from action
group by fk_issue_id;

exec dbms_stats.gather_table_stats('tg','issue_open_mv')

SQL> select table_name, last_analyzed from dba_tables where table_name = 'ISSUE_OPEN_MV';

TABLE_NAME                     LAST_ANAL
------------------------------ ---------
ISSUE_OPEN_MV                  15-NOV-10

*note: table was created a couple of days ago*

SQL> exec dbms_mview.explain_mview('TG.ISSUE_OPEN_MV');

CAPABILITY_NAME                P REL_TEXT MSGTXT
------------------------------ - -------- ------------------------------------------------------------
PCT                            N
REFRESH_COMPLETE               Y
REFRESH_FAST                   Y
REWRITE                        Y
PCT_TABLE                      N ACTION   relation is not a partitioned table
REFRESH_FAST_AFTER_INSERT      Y
REFRESH_FAST_AFTER_ANY_DML     Y
REFRESH_FAST_PCT               N          PCT is not possible on any of the detail tables in the mater
REWRITE_FULL_TEXT_MATCH        Y
REWRITE_PARTIAL_TEXT_MATCH     Y
REWRITE_GENERAL                Y
REWRITE_PCT                    N          general rewrite is not possible or PCT is not possible on an
PCT_TABLE_REWRITE              N ACTION   relation is not a partitioned table

13 rows selected.
When I update a row in the base table (the MV log is empty beforehand):

SQL> select count(*) from mlog$_action;

  COUNT(*)
----------
         0
The following is what I get via TKPROF:

var in_action_id number;
exec :in_action_id := 398385;

UPDATE action
   SET emailed_customer = SYSTIMESTAMP
 WHERE pk_action_id = :in_action_id
   AND DECODE(emailed_customer, NULL, 0, 1) = 0
/
commit;
Could someone explain why the SELECT/UPDATE/DELETE on MLOG$_ACTION is so 'expensive' when there should be only 2 rows (the old value and the new value) in this log after an update? What could I do to improve the performance of the update?******************************************************************************** INSERT /*+ IDX(0) */ INTO "TG"."MLOG$_ACTION" (dmltype$$,old_new$$,snaptime$$, change_vector$$,sequence$$,m_row$$,"PK_ACTION_ID","DATE_OPENED", "FK_ISSUE_ID") VALUES (:d,:o,to_date('4000-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS'),:c, sys.cdc_rsid_seq$.nextval,:m,:1,:2,:3) call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 1 0.00 0.01 0 0 0 0 Execute 2 0.00 0.03 4 4 4 2 Fetch 0 0.00 0.00 0 0 0 0 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 3 0.00 0.04 4 4 4 2 Misses in library cache during parse: 1 Misses in library cache during execute: 1 Optimizer mode: CHOOSE Parsing user id: SYS (recursive depth: 1) Rows Row Source Operation ------- --------------------------------------------------- 2 SEQUENCE CDC_RSID_SEQ$ (cr=0 pr=0 pw=0 time=28 us) Elapsed times include waiting on following events: Event waited on Times Max. 
Wait Total Waited ---------------------------------------- Waited ---------- ------------ db file sequential read 4 0.01 0.01 ******************************************************************************** ******************************************************************************** update "TG"."MLOG$_ACTION" set snaptime$$ = :1 where snaptime$$ > to_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS') call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 1 0.00 0.01 0 0 0 0 Execute 1 0.94 5.36 55996 56012 1 2 Fetch 0 0.00 0.00 0 0 0 0 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 2 0.94 5.38 55996 56012 1 2 Misses in library cache during parse: 1 Misses in library cache during execute: 1 Optimizer mode: CHOOSE Parsing user id: SYS (recursive depth: 1) Rows Row Source Operation ------- --------------------------------------------------- 0 UPDATE MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=5364554 us) 2 TABLE ACCESS FULL MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=46756 us) Elapsed times include waiting on following events: Event waited on Times Max. 
Wait Total Waited ---------------------------------------- Waited ---------- ------------ db file scattered read 3529 0.02 4.91 ******************************************************************************** select dmltype$$, max(snaptime$$) from "TG"."MLOG$_ACTION" where snaptime$$ <= :1 group by dmltype$$ call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 1 0.00 0.00 0 0 0 0 Execute 1 0.00 0.00 0 0 0 0 Fetch 2 0.70 0.68 55996 56012 0 1 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 4 0.70 0.68 55996 56012 0 1 Misses in library cache during parse: 1 Misses in library cache during execute: 1 Optimizer mode: CHOOSE Parsing user id: SYS (recursive depth: 1) Rows Row Source Operation ------- --------------------------------------------------- 1 SORT GROUP BY (cr=56012 pr=55996 pw=0 time=685671 us) 2 TABLE ACCESS FULL MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=1851 us) Elapsed times include waiting on following events: Event waited on Times Max. 
Wait Total Waited ---------------------------------------- Waited ---------- ------------ db file scattered read 3529 0.00 0.38 ******************************************************************************** delete from "TG"."MLOG$_ACTION" where snaptime$$ <= :1 call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 1 0.00 0.00 0 0 0 0 Execute 1 0.71 0.70 55946 56012 3 2 Fetch 0 0.00 0.00 0 0 0 0 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 2 0.71 0.70 55946 56012 3 2 Misses in library cache during parse: 1 Misses in library cache during execute: 1 Optimizer mode: CHOOSE Parsing user id: SYS (recursive depth: 1) Rows Row Source Operation ------- --------------------------------------------------- 0 DELETE MLOG$_ACTION (cr=56012 pr=55946 pw=0 time=702813 us) 2 TABLE ACCESS FULL MLOG$_ACTION (cr=56012 pr=55946 pw=0 time=1814 us) Elapsed times include waiting on following events: Event waited on Times Max. Wait Total Waited ---------------------------------------- Waited ---------- ------------ db file scattered read 3530 0.00 0.39 db file sequential read 33 0.00 0.00 ********************************************************************************
Let me know if you need more info... I'd be happy to provide it.

My guess would be that you once had a very large transaction that inserted a large number of rows into this table. The table segment is therefore big now, and the high-water mark sits way at the end of that segment, so a full table scan has to read a large number of empty blocks just to retrieve the two rows.
You can issue a TRUNCATE on this MLOG$ table: that would free up the empty blocks and bring the high-water mark back to the first block.
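The suggested fix is a one-liner; as a hedged sketch, and only to be run when the MV log is empty and no refresh is in flight:

```sql
-- Deallocates the empty blocks and resets the segment's
-- high-water mark. Unsafe while a refresh is pending.
TRUNCATE TABLE mlog$_action;
```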
-
Comma-separated string from a nested table
I would like to insert the contents of a nested table into a string as comma-separated values.
For example, I created:
CREATE TYPE u_results AS TABLE OF VARCHAR2 (10);
/
CREATE TABLE example_table (
obj_id NUMBER,
obj_results u_results)
NESTED TABLE obj_results STORE AS obj_results_t;
INSERT INTO example_table (obj_id, obj_results) VALUES (1, u_results ('OK', 'NOK', 'NN'));
INSERT INTO example_table (obj_id, obj_results) VALUES (2, u_results ('OK', 'NOK'));
CREATE TABLE example_table2 (
obj_id NUMBER,
obj_results2 VARCHAR2 (100));
So, in the example_table2 table, I would like the obj_results values to go into obj_results2 as a comma-separated string.
for example
OBJ_ID obj_results2
1 OK, NOK, NN
2 OK, NOK
Any ideas? Thank you
G.

SQL> create type u_results as table of varchar2 (10);
/
Type created.

SQL> select rtrim (xmlagg (xmlelement (e, column_value || ',')).extract ('//text()'), ',') csv
from table (u_results ('AA', 'BB')) t
/

CSV
----------
AA,BB

1 row selected.

SQL> drop type u_results
/
Type dropped.
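On 11gR2 and later, LISTAGG is an alternative to the XMLAGG trick for collapsing a nested table into a comma-separated string. A sketch against the example tables above (ordering here is alphabetical, which may differ from the collection's insertion order):

```sql
-- Unnest each row's collection and aggregate it back as a string.
INSERT INTO example_table2 (obj_id, obj_results2)
SELECT e.obj_id,
       (SELECT LISTAGG(t.column_value, ', ')
               WITHIN GROUP (ORDER BY t.column_value)
        FROM   TABLE(e.obj_results) t)
FROM   example_table e;
```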
-
Problem using a comma-separated list with a nested table element
Hello
I have a comma-separated list like this:
H23004,H24005,T7231,T8231,T9231
and I want to create a function that builds a WHERE clause with one predicate per element, with output like this:
UPPER('H23004') IN (UPPER(charge)) OR UPPER('H23005') IN (UPPER(charge)) OR UPPER('T7231') IN (UPPER(charge)) OR UPPER('T8231') IN (UPPER(charge)) OR UPPER('T9231') IN (UPPER(charge))
Here's my test function that is not working properly:

create or replace function FNC_LIST_TO_WHERE_CLAUSE(v_list in VARCHAR2) return varchar2 is
  -- declaration of list type
  TYPE batch_type IS TABLE OF pr_stamm.charge%TYPE;
  -- variable for batches
  v_batch batch_type := batch_type('''' || replace(v_list, ',', ''',''') || '''');
  return_script varchar2(1000);
BEGIN
  -- loop as long as there are objects left
  FOR i IN v_batch.FIRST .. v_batch.LAST LOOP
    --DBMS_OUTPUT.PUT_LINE(offices(i));
    -- create where clause
    IF i = 1 THEN
      return_script := 'UPPER(' || v_batch(i) || ') IN (UPPER(charge))';
    ELSE
      return_script := return_script || ' OR UPPER(' || v_batch(i) || ') IN (UPPER(charge))';
    END IF;
  END LOOP;
  return (return_script);
end;

The output looks like this:
UPPER('H23004','H24005','T7231','T8231','T9231') IN (UPPER(charge))
I don't know what I did wrong? It computes the wrong number of array elements! (v_batch.COUNT should be 5)
v_batch.FIRST = 1
v_batch.LAST = 1
Kind regards
Tobias

Try this...

declare
  text varchar2 (1000) := 'H23004, H24005, T7231, T8231, T9231';
  v_where varchar2 (1000);
begin
  text := text || ',';
  while instr (text, ',') <> 0
  loop
    v_where := v_where || 'UPPER(''' || substr (text, 1, instr (text, ',', 1) - 1) || ''') IN (UPPER(charge)) OR ';
    text := substr (text, instr (text, ',', 1) + 1);
  end loop;
  v_where := substr (v_where, 1, length (v_where) - 3);
  dbms_output.put_line (v_where);
end;

convert it to a function...
Maybe you are looking for
-
Mirroring a MacBook Pro screen to an iMac 21.5
I want to use my iMac 21.5-inch (late 2015) as a display for my MacBook Pro Retina (2015), but I have not been able to do so via a Thunderbolt cable. Can someone please help me?
-
Qosmio X870-13L - black screen after RAM upgrade
Hello. This is the 2nd time I have bought RAM to upgrade my PC and it does not work. My PC came with 2 RAM modules already installed, 4 GB each: "PC-12800 DDR3 800 MHz 11-11-11-28 M471B5273CHO-CKO Samsung". I did some research before bu
-
Satellite A200 (PSAECE) function button problem
I installed Win Vista Ultimate instead of Home Premium. My question is: which driver should I install to activate my function buttons? When I move the mouse to the top corner nothing happens, but Fn + F6 changes the brightness.
-
iPhone 6s does not connect to wifi
my iPhone 6s does not connect to the wifi at home. It sees the network and accepts the password. My other Apple devices connect fine. I have renewed the lease and reset the network, with no luck. Any suggestions?
-
Questions about pushing of BDS BB10
My setup is as follows: - a simple Java application, half a decade old, calling this URL with POST: new URL("http", "is-hh-bb01", 8080, "/push?DESTINATION=" + pin + "&PORT=" + pushPort + "&REQUESTURI=/"); - the PushCollector sample. 1. I've read here tha