Bulk collect into a nested table with EXTEND
Hi all,
I want to get all the column names of the tables EMP and DEPT, so I am using BULK COLLECT INTO a nested table.
*) I wrote the function in three different ways. Examples 1 and 2 (DM_NESTTAB_BULKCOLLECT_1 & DM_NESTTAB_BULKCOLLECT_2) do not give the desired result.
*) They only return the columns of the EMP table. That is, the loop visits both DEPT and EMP, but the result contains only the EMP columns.
*) I think there is some problem with the nested table EXTEND.
*) I want to understand this in depth.
Can we use BULK COLLECT INTO a nested table together with EXTEND?
If yes, then please fix the code below (examples 1 & 2) and explain it to me.
The code is given below.
CREATE OR REPLACE TYPE NEST_TAB IS TABLE OF VARCHAR2 (1000);
EX: 1:
----
-- Bulk collect into a nested table with EXTEND
CREATE OR REPLACE FUNCTION DM_NESTTAB_BULKCOLLECT_1
   RETURN NEST_TAB
AS
   l_nesttab NEST_TAB := NEST_TAB();
BEGIN
   FOR tab_rec IN (SELECT table_name
                     FROM user_tables
                    WHERE table_name IN ('EMP', 'DEPT')) LOOP
      l_nesttab.EXTEND;

      SELECT column_name
        BULK COLLECT INTO l_nesttab
        FROM user_tab_columns
       WHERE table_name = tab_rec.table_name
       ORDER BY column_id;
   END LOOP;

   RETURN l_nesttab;
EXCEPTION
   WHEN OTHERS THEN
      RAISE;
END DM_NESTTAB_BULKCOLLECT_1;
/

SELECT *
  FROM TABLE (DM_NESTTAB_BULKCOLLECT_1);
OUTPUT:
-------
EMPNO
ENAME
JOB
MGR
HIREDATE
SAL
COMM
DEPTNO
* Only the EMP table columns are there in the nested table.
-----------------------------------------------------------------------------------------------------
EX: 2:
-----
-- Bulk collect into a nested table, with EXTEND based on a count
CREATE OR REPLACE FUNCTION DM_NESTTAB_BULKCOLLECT_2
   RETURN NEST_TAB
AS
   l_nesttab NEST_TAB := NEST_TAB();
   v_col_cnt NUMBER;
BEGIN
   FOR tab_rec IN (SELECT table_name
                     FROM user_tables
                    WHERE table_name IN ('EMP', 'DEPT')) LOOP
      SELECT MAX (column_id)
        INTO v_col_cnt
        FROM user_tab_columns
       WHERE table_name = tab_rec.table_name;

      l_nesttab.EXTEND (v_col_cnt);

      SELECT column_name
        BULK COLLECT INTO l_nesttab
        FROM user_tab_columns
       WHERE table_name = tab_rec.table_name
       ORDER BY column_id;
   END LOOP;

   RETURN l_nesttab;
EXCEPTION
   WHEN OTHERS THEN
      RAISE;
END DM_NESTTAB_BULKCOLLECT_2;
/

SELECT *
  FROM TABLE (DM_NESTTAB_BULKCOLLECT_2);
OUTPUT:
-------
EMPNO
ENAME
JOB
MGR
HIREDATE
SAL
COMM
DEPTNO
* Only the EMP table columns are there in the nested table.
-------------------------------------------------------------------------------------------
EX: 3:
-----
-- Bulk collect into a local nested table, then extend the result inside a FOR loop
CREATE OR REPLACE FUNCTION DM_NESTTAB_BULKCOLLECT_3
   RETURN NEST_TAB
AS
   l_nesttab NEST_TAB := NEST_TAB();

   TYPE local_nest_tab IS TABLE OF VARCHAR2 (1000);

   l_localnesttab LOCAL_NEST_TAB := LOCAL_NEST_TAB();
   x NUMBER := 1;
BEGIN
   FOR tab_rec IN (SELECT table_name
                     FROM user_tables
                    WHERE table_name IN ('EMP', 'DEPT')) LOOP
      SELECT column_name
        BULK COLLECT INTO l_localnesttab
        FROM user_tab_columns
       WHERE table_name = tab_rec.table_name
       ORDER BY column_id;

      FOR i IN 1 .. l_localnesttab.COUNT LOOP
         l_nesttab.EXTEND;
         l_nesttab (x) := l_localnesttab (i);
         x := x + 1;
      END LOOP;
   END LOOP;

   RETURN l_nesttab;
EXCEPTION
   WHEN OTHERS THEN
      RAISE;
END DM_NESTTAB_BULKCOLLECT_3;
/

SELECT *
  FROM TABLE (DM_NESTTAB_BULKCOLLECT_3);
OUTPUT:
------
DEPTNO
DNAME
LOC
EMPNO
ENAME
JOB
MGR
HIREDATE
SAL
COMM
DEPTNO
* Now I get the desired result set: the columns of both the DEPT and the EMP table are in the nested table.
Thank you
Ann
BULK COLLECT cannot add values to an existing collection. It can only overwrite it.
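In other words, each BULK COLLECT replaces the entire target collection, so the EXTEND calls in examples 1 and 2 are simply wiped out. One way to accumulate results across loop iterations is to fetch into a scratch collection and append it with MULTISET UNION ALL. A minimal sketch, assuming the NEST_TAB type defined above:

```sql
-- Sketch: append per-table results instead of overwriting them.
CREATE OR REPLACE FUNCTION DM_NESTTAB_MULTISET
   RETURN NEST_TAB
AS
   l_nesttab NEST_TAB := NEST_TAB();
   l_cols    NEST_TAB;
BEGIN
   FOR tab_rec IN (SELECT table_name
                     FROM user_tables
                    WHERE table_name IN ('EMP', 'DEPT')) LOOP
      -- l_cols is overwritten on every iteration; that is fine here
      SELECT column_name
        BULK COLLECT INTO l_cols
        FROM user_tab_columns
       WHERE table_name = tab_rec.table_name
       ORDER BY column_id;

      -- append the scratch collection to the accumulator
      l_nesttab := l_nesttab MULTISET UNION ALL l_cols;
   END LOOP;

   RETURN l_nesttab;
END DM_NESTTAB_MULTISET;
/
```

No EXTEND is needed at all here: MULTISET UNION ALL sizes the result collection itself.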
Tags: Database
Similar Questions
-
Hi all
I want to know if using BULK COLLECT INTO with LIMIT will help avoid running out of TEMP tablespace.
We use Oracle 11g R1.
I am assigned the task of creating journal tables for all tables in an APEX query.
I created procedures that execute some SQL statements to create CTAS tables (Create Table As Select), and then triggers on those tables.
We have about three tables with more than 26 million records.
Everything seemed to run fine until we reached a table with more than 15 million records; then we got an error saying we ran out of TEMP tablespace.
I googled the topic and found these tips:
Use NOLOGGING
Use PARALLEL
Use BULK COLLECT INTO with LIMIT
However, the questions those tips answer are usually about running short of memory rather than running out of TEMP tablespace.
I'm just a junior developer and have not dealt with tables of more than 10 million records at a time like this before.
The database support is outsourced, so we try to keep contact with the DBAs as minimal as possible. My manager asked me to find a solution without asking the administrators to extend the TEMP tablespace.
I wrote a few BULK COLLECT INTO loops to insert about 300,000 rows at a time on the development environment. It seems to work.
But that code has only run against a table of about 400,000 records. I tried to add more data to the test table, but yet again we ran out of tablespace on DEV (this time it's a DATA tablespace, not TEMP).
I'll give it a go against the 26-million-record table in Production this weekend. I just want to know if it is worth trying.
Thanks for reading this.
Ann
You really need to check that you do not have huge row sizes (like several KB per row); yours are not too bad at all, which is good!
A good rule of thumb for maximizing the LIMIT clause value is to see how much memory you can afford to consume in the PGA (to reduce the number of fetch and FORALL calls, and therefore the context switches) and adjust the limit to come as close to that amount as possible.
Use the routines below to check what LIMIT value would be best suited to your system, because it depends on your memory allocation and CPU consumption. Stay flexible, based on your PGA limits, as row lengths vary, but this method will get you a good order of magnitude.
CREATE OR REPLACE PROCEDURE show_pga_memory (context_in IN VARCHAR2 DEFAULT NULL)
IS
   l_memory NUMBER;
BEGIN
   SELECT st.value
     INTO l_memory
     FROM SYS.v_$session se, SYS.v_$sesstat st, SYS.v_$statname nm
    WHERE se.audsid = USERENV ('SESSIONID')
      AND st.statistic# = nm.statistic#
      AND se.sid = st.sid
      AND nm.name = 'session pga memory';

   DBMS_OUTPUT.put_line (CASE
                            WHEN context_in IS NULL THEN NULL
                            ELSE context_in || ' - '
                         END
                         || 'PGA memory used in session = '
                         || TO_CHAR (l_memory)
                        );
END show_pga_memory;
DECLARE
   PROCEDURE fetch_all_rows (limit_in IN PLS_INTEGER)
   IS
      CURSOR source_cur
      IS
         SELECT *
           FROM YOUR_TABLE;

      TYPE source_aat IS TABLE OF source_cur%ROWTYPE
         INDEX BY PLS_INTEGER;

      l_source source_aat;
      l_start  PLS_INTEGER;
      l_end    PLS_INTEGER;
   BEGIN
      DBMS_SESSION.free_unused_user_memory;
      show_pga_memory (limit_in || ' - BEFORE');
      l_start := DBMS_UTILITY.get_cpu_time;

      OPEN source_cur;
      LOOP
         FETCH source_cur
            BULK COLLECT INTO l_source LIMIT limit_in;
         EXIT WHEN l_source.COUNT = 0;
      END LOOP;
      CLOSE source_cur;

      l_end := DBMS_UTILITY.get_cpu_time;
      DBMS_OUTPUT.put_line ('Elapsed CPU time for limit of '
                            || limit_in
                            || ' = '
                            || TO_CHAR (l_end - l_start)
                           );
      show_pga_memory (limit_in || ' - AFTER');
   END fetch_all_rows;
BEGIN
   fetch_all_rows (20000);
   fetch_all_rows (40000);
   fetch_all_rows (60000);
   fetch_all_rows (80000);
   fetch_all_rows (100000);
   fetch_all_rows (150000);
   fetch_all_rows (250000);
   -- etc.
END;
-
Hi all
I was reading [http://asktom.oracle.com/pls/apex/f?p=100:11:0:P11_QUESTION_ID:5918938803188] on Tom Kyte's site about BULK COLLECT with LIMIT. The code there uses the %NOTFOUND cursor attribute to exit the fetch loop. What I do in this situation is use the EXISTS collection method rather than the cursor attribute, like this:

create or replace procedure p1 is
   type num_list_type is table of number index by pls_integer;
   num_list num_list_type;
   cursor c1 is select temp from test;
begin
   open c1;
   loop
      fetch c1 bulk collect into num_list limit 2;
      if num_list.exists(1) = false then
         exit;
      end if;
      for i in num_list.first .. num_list.last loop
         dbms_output.put_line (num_list (i));
      end loop;
   end loop;
end;

because when I use

exit when c1%notfound

the loop exits as soon as the cursor fetches fewer rows than the limit, leaving the last few rows unprocessed. My version of the code works properly.
Questions:
1. Is the EXIT statement above correct, or is there another way? (I'm a little skeptical because I'm not using the cursor attribute.)
2. How do we decide on the LIMIT size based on the hardware and Oracle settings? Any guidelines?
3. Is it still best practice to use BULK COLLECT INTO when working with a cursor that returns several rows?
Best regards,
Val
Hello,
1. Yes, the EXIT statement in the above code will work correctly. The other way is to use the %NOTFOUND cursor attribute.
2. There is no precise way to decide the LIMIT size.
3. It depends on the number of records. If you have a lot of records, always use LIMIT; it improves performance.
After reading the link posted by Blu, I am editing this post:
1. Do not use cursor attributes when you use collections with a cursor. Use collection methods such as COUNT or EXISTS instead.
@Blu,
Thanks very much for the link.
Thank you,
Suri
Published by: Suri on January 26, 2012 20:38
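For reference, the LIMIT-loop pattern this advice amounts to looks like the following sketch (assuming a table test with a numeric column temp, as in the question; the limit of 100 is illustrative):

```sql
declare
   type num_list_type is table of number index by pls_integer;
   num_list num_list_type;
   cursor c1 is select temp from test;
begin
   open c1;
   loop
      fetch c1 bulk collect into num_list limit 100;
      -- test the collection, not c1%notfound: %notfound is already set when
      -- the last (partial) batch is fetched, which would skip those rows
      exit when num_list.count = 0;
      for i in 1 .. num_list.count loop
         dbms_output.put_line (num_list (i));
      end loop;
   end loop;
   close c1;
end;
/
```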
-
Help joining a nested table with a regular table
I'm creating a nested table type codelist of the object type prtcnpt_info. In an anonymous block I declare t_code as the nested table type codelist.
Now when I try to join the nested table to a regular Oracle table I get the error: PL/SQL: ORA-00904: "COLUMN_VALUE": invalid identifier.
Please help me with this and provide a link to a tutorial about these concepts. Here is the code I wrote:
-- Start code
create or replace type prtcnpt_info as object (id number
                                              , name varchar2 (200)
                                              , code varchar2 (30));

create type codelist is table of prtcnpt_info;

declare
   t_code codelist;
begin
   select prtcnpt_info (b.pid, b.name, pt.code)
     bulk collect into t_code
     from party pt
        , mc_code b
    where pt.cd in ('AAA', 'BBB')
      and pt.ptype_id = b.pt_type_id;

   insert into table (id
                    , run_id
                    , data
                    , p_id
                     )
      select id
           , run_id
           , data
           , prtct.id                                        --> 1
        from table_2 t2
           , (select column_value from table (t_code)) prtct
       where prtct.id = t2.p_id;                             --> 2
end;
-- End code
Also, about the anonymous block:
1 => Is this the right way to get the id value (b.pid) out of the nested table t_code via the alias prtct?
2 => Is this the right way to join the nested table with a regular table? I want to join on the id column of the tables.
Published by: 914912 on April 30, 2012 02:11

Write the insert like this and try:

insert into table (id, run_id, data, p_id)
   select id, run_id, data, prtct.id
     from table_2 t2
        , table (t_code) prtct
    where prtct.id = t2.p_id;
-
Bulk collect into a collection of objects
create or replace type typ_obj as object
   (l_x number (10),
    l_y varchar2 (10),
    l_z varchar2 (10)
   );

create type typ_obj_tt is table of typ_obj;
desc insert_table
   c_x   number (10)
   c_p   number (10)

desc temp2_table
   dojir number (10)
   c_y   varchar2 (10)
   c_z   varchar2 (10)
procedure prc_x (p_obj_out OUT typ_obj_tt)
is
   cursor c1
   is
      select t1.c_x,
             t2.c_y,
             t2.c_z
        from insert_table t1,
             temp2_table t2
       where ....
       ;
begin
   open c1;
   fetch c1 bulk collect into p_obj_out;
   close c1;
end;
This raises an error.
Can you tell me how to get the data into this output table-of-objects type?
Thanks in advance... any help will be much appreciated.

Please do not spam the forums with duplicate topics - bulk collect into the collection of objects
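For completeness, the usual fix is to select the object-type constructor so that each fetched row is a typ_obj instance, since a bulk collect into a table of objects cannot map three scalar columns to one object by itself. A sketch, assuming the types and tables described above (the join condition is hypothetical, because the original WHERE clause was elided):

```sql
create or replace procedure prc_x (p_obj_out out typ_obj_tt)
is
begin
   -- wrap each row in the typ_obj constructor so it matches typ_obj_tt
   select typ_obj (t1.c_x, t2.c_y, t2.c_z)
     bulk collect into p_obj_out
     from insert_table t1,
          temp2_table t2
    where t1.c_x = t2.dojir;   -- hypothetical join condition
end;
/
```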
-
Using a cursor FOR loop vs. BULK COLLECT INTO
Hi all
In which cases should we prefer the cursor FOR loop, and in which BULK COLLECT? The following contains two blocks with the same query: one uses a cursor FOR loop, the other uses BULK COLLECT. Which runs better on the existing data? How do we measure performance between these two?
I use the example HR schema.
In this code I put a timestamp in each block, but they are useless, since both run virtually instantaneously...

declare
   l_start number;
begin
   l_start := DBMS_UTILITY.get_time;
   dbms_lock.sleep(1);
   FOR employee IN (SELECT e.last_name, j.job_title
                      FROM employees e, jobs j
                     WHERE e.job_id = j.job_id
                       AND e.job_id LIKE '%CLERK%'
                       AND e.manager_id > 120
                     ORDER BY e.last_name)
   LOOP
      DBMS_OUTPUT.PUT_LINE ('Name = ' || employee.last_name || ', Job = ' || employee.job_title);
   END LOOP;
   DBMS_OUTPUT.put_line ('total time: ' || to_char (DBMS_UTILITY.get_time - l_start) || ' hsecs');
end;
/

declare
   l_start number;
   type rec_type is table of varchar2(20);
   name_rec rec_type;
   job_rec  rec_type;
begin
   l_start := DBMS_UTILITY.get_time;
   dbms_lock.sleep(1);
   SELECT e.last_name, j.job_title
     BULK COLLECT INTO name_rec, job_rec
     FROM employees e, jobs j
    WHERE e.job_id = j.job_id
      AND e.job_id LIKE '%CLERK%'
      AND e.manager_id > 120
    ORDER BY e.last_name;
   for j in name_rec.first .. name_rec.last loop
      DBMS_OUTPUT.PUT_LINE ('Name = ' || name_rec(j) || ', Job = ' || job_rec(j));
   end loop;
   DBMS_OUTPUT.put_line ('total time: ' || to_char (DBMS_UTILITY.get_time - l_start) || ' hsecs');
end;
/
Best regards
Val

(1) Bulk fetching's primary use is to reduce the context switching between the SQL and PL/SQL engines.
(2) You should always use LIMIT when working with BULK COLLECT, so that it does not increase the load on the PGA.
(3) The ideal number of LIMIT rows is around 100.
Also, if you really want to compare performance between the two different SQL / PL/SQL approaches, try using Tom Kyte's runstats package:
http://asktom.Oracle.com/pls/Apex/asktom.download_file?p_file=6551378329289980701
-
Bulk collect into a record type
Sorry for the stupid question; I'm doing something really simple wrong here but cannot figure it out. I want to select a few rows from a table via a cursor, then bulk collect them into a record. I'll eventually extend the record to include additional fields that I will fill from function return values, but I can't get this simple test case to run.
PLS-00497 is the main error.
Thanks in advance.

create table test (
   id             number primary key,
   val            varchar2(20),
   something_else varchar2(20));

insert into test (id, val, something_else) values (1, 'test1', 'else');
insert into test (id, val, something_else) values (2, 'test2', 'else');
insert into test (id, val, something_else) values (3, 'test3', 'else');
insert into test (id, val, something_else) values (4, 'test4', 'else');
commit;

SQL> declare
  2    cursor test_cur is
  3      (select id, val
  4         from test);
  5
  6    type test_rt is record (
  7      id  test.id%type,
  8      val test.val%type);
  9
 10    test_rec test_rt;
 11
 12  begin
 13    open test_cur;
 14    loop
 15      fetch test_cur bulk collect into test_rec limit 10;
 16      null;
 17      exit when test_rec.count = 0;
 18    end loop;
 19    close test_cur;
 20  end;
 21  /
    fetch test_cur bulk collect into test_rec limit 10;
    *
ERROR at line 15:
ORA-06550: line 15, column 38:
PLS-00497: cannot mix between single row and multi-row (BULK) in INTO list
ORA-06550: line 17, column 21:
PLS-00302: component 'COUNT' must be declared
ORA-06550: line 17, column 2:
PL/SQL: Statement ignored
You must declare a collection (array) based on your record type.

DECLARE
   CURSOR test_cur IS
      SELECT id, val
        FROM test;

   type test_rt IS record (
      id  test.id%type,
      val test.val%type);

   type test_rec_arr is table of test_rt index by pls_integer;

   test_rec test_rec_arr;
BEGIN
   OPEN test_cur;
   LOOP
      FETCH test_cur bulk collect INTO test_rec limit 10;
      NULL;
      EXIT WHEN test_rec.count = 0;
   END LOOP;
   CLOSE test_cur;
END;
/

PL/SQL procedure successfully completed.
Elapsed: 00:00:00.06
ME_XE>

Notice that the difference is:

type test_rec_arr is table of test_rt index by pls_integer;
test_rec test_rec_arr;
-
Bulk collect into after an insert?
The INSERT below works great:

insert into GL_Interface_Control_temp
   (Set_Of_Books_ID, Group_ID, JE_Source_Name, Interface_Run_ID)
values
   ('2', 10, 'TEST_SOURCE', '2')
returning ty_GLI (Set_Of_Books_ID, Group_ID, JE_Source_Name, Interface_Run_ID, 0, 0, null)
   bulk collect into l_GLI_Tab;

BUT when combined with an INSERT ... SELECT, as follows:

insert into GL_Interface_Control_temp
   (Set_Of_Books_ID, Group_ID, JE_Source_Name, Interface_Run_ID)
   (select gi.Set_Of_Books_ID
         , GL_Interface_Control_S.nextval
         , gi.JE_Source_Name
         , GL_Journal_Import_S.nextval
      from GL_Interface_temp gi
     where gi.Currency_Conversion_Date <= l_Max_Rate_Date
       and gi.Set_Of_Books_ID = v_Set_Of_Books_ID)
returning ty_GLI (Set_Of_Books_ID, Group_ID, JE_Source_Name, Interface_Run_ID, 0, 0, null)
   bulk collect into l_GLI_Tab;

I get the following error:

ERROR at line 15:
ORA-00933: SQL command not properly ended

Any idea why? Can we not use RETURNING ... BULK COLLECT INTO with an INSERT ... SELECT?
P;

I'm sorry to tell you that the RETURNING clause can only be used with the VALUES clause:
http://download.Oracle.com/docs/CD/E11882_01/server.112/e10592/statements_9014.htm#i2121671
Bye,
Alessandro
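A common workaround is to bulk collect the source rows first and then use FORALL with single-row INSERT ... VALUES, which does accept RETURNING ... BULK COLLECT INTO. A sketch only: the collection type names below are illustrative, l_GLI_Tab is assumed to be a collection of ty_GLI, and v_Set_Of_Books_ID / l_Max_Rate_Date come from the surrounding block in the question:

```sql
declare
   -- hypothetical staging collections for the source columns
   type t_sob_tab is table of GL_Interface_temp.Set_Of_Books_ID%type;
   type t_src_tab is table of GL_Interface_temp.JE_Source_Name%type;

   l_sob     t_sob_tab;
   l_src     t_src_tab;
   l_GLI_Tab ty_GLI_tab;   -- assumed: a collection type of ty_GLI
begin
   -- 1) bulk collect the source rows
   select gi.Set_Of_Books_ID, gi.JE_Source_Name
     bulk collect into l_sob, l_src
     from GL_Interface_temp gi
    where gi.Currency_Conversion_Date <= l_Max_Rate_Date
      and gi.Set_Of_Books_ID = v_Set_Of_Books_ID;

   -- 2) FORALL with single-row VALUES, which supports RETURNING
   forall i in 1 .. l_sob.count
      insert into GL_Interface_Control_temp
         (Set_Of_Books_ID, Group_ID, JE_Source_Name, Interface_Run_ID)
      values
         (l_sob(i), GL_Interface_Control_S.nextval,
          l_src(i), GL_Journal_Import_S.nextval)
      returning ty_GLI (Set_Of_Books_ID, Group_ID, JE_Source_Name,
                        Interface_Run_ID, 0, 0, null)
         bulk collect into l_GLI_Tab;
end;
/
```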
-
INSERT ... SELECT vs. bulk collect on Exadata
Hi all
We work on Oracle Exadata. I am new to Exadata. We need to insert some 7.5 million records into our table. My manager, who has worked on Exadata, asked me to use INSERT ... SELECT to load the 7.5 million records from one table to another. He asked me to forget about bulk collect concepts. I have read that Exadata prefers set-based techniques over row-based ones. Is BULK COLLECT not a set-based technique? Which is the better technique to load records from one table to the other on Oracle Exadata: INSERT ... SELECT or BULK COLLECT? Please advise me on this issue.

Tom's mantra applies to Exadata too; see his follow-up here:
Ask Tom "DML - insert/select or bulk collect..."
-
BULK COLLECT INTO a table of records
Hello
I can't find a solution for this case. I am trying to fetch data into an array of records that themselves contain sub-records.
-- Example table:
CREATE TABLE TABLE1
   ("COLUMN1" NUMBER,
    "COLUMN2" NUMBER,
    "COLUMN3" NUMBER,
    "COLUMN4" NUMBER,
    "COLUMN5" NUMBER,
    "COLUMN6" NUMBER
   );

-- Sample data:
INSERT INTO TABLE1 (COLUMN1, COLUMN2, COLUMN3, COLUMN4, COLUMN5, COLUMN6) VALUES (11, 12, 13, 14, 15, 16);
INSERT INTO TABLE1 (COLUMN1, COLUMN2, COLUMN3, COLUMN4, COLUMN5, COLUMN6) VALUES (21, 22, 23, 24, 25, 26);
INSERT INTO TABLE1 (COLUMN1, COLUMN2, COLUMN3, COLUMN4, COLUMN5, COLUMN6) VALUES (31, 32, 33, 34, 35, 36);
INSERT INTO TABLE1 (COLUMN1, COLUMN2, COLUMN3, COLUMN4, COLUMN5, COLUMN6) VALUES (41, 42, 43, 44, 45, 46);
INSERT INTO TABLE1 (COLUMN1, COLUMN2, COLUMN3, COLUMN4, COLUMN5, COLUMN6) VALUES (51, 52, 53, 54, 55, 56);
Here I declared some of the columns of the table as individual sub-records, to be used in the RETURN clause of some functions:

DECLARE
   TYPE t_col_group1 IS RECORD (col2 TABLE1.COLUMN2%TYPE,
                                col3 TABLE1.COLUMN3%TYPE
                               );

   TYPE t_col_group2 IS RECORD (col4 TABLE1.COLUMN2%TYPE,
                                col5 TABLE1.COLUMN3%TYPE
                               );

   TYPE t_coll_collection IS RECORD (col1       TABLE1.COLUMN1%TYPE,
                                     col_group1 t_col_group1,
                                     col_group2 t_col_group2,
                                     col6       TABLE1.COLUMN1%TYPE
                                    );

   TYPE t_table IS TABLE OF t_coll_collection INDEX BY PLS_INTEGER;

   v_table t_table;

   CURSOR c_table IS
      SELECT COLUMN1, COLUMN2, COLUMN3, COLUMN4, COLUMN5, COLUMN6
        FROM TABLE1;
BEGIN
   OPEN c_table;
   FETCH c_table BULK COLLECT INTO v_table;
   CLOSE c_table;
END;
I cannot manage to get the data into the table with BULK COLLECT:

Error report:
ORA-06550: line 25, column 35:
PLS-00597: expression 'V_TABLE' in the INTO list is of wrong type
ORA-06550: line 25, column 3:
PL/SQL: SQL statement ignored
06550. 00000 - "line %s, column %s:\n%s"
*Cause:    Usually a PL/SQL compilation error.
*Action:
Thank you!
André
My environment:
Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
You must define SQL types:

create or replace type t_col_group1 as object (col2 number, col3 number)
/
create or replace type t_col_group2 as object (col4 number, col5 number)
/
create or replace type t_coll_collection as object
(
   col1       number,
   col_group1 t_col_group1,
   col_group2 t_col_group2,
   col6       number
)
/
create or replace type t_table as table of t_coll_collection
/

declare
   v_table t_table;

   cursor c_table
   is
      select t_coll_collection
             (
               column1,
               t_col_group1 (column2, column3),
               t_col_group2 (column4, column5),
               column6
             )
        from table1;
begin
   open c_table;
   fetch c_table bulk collect into v_table;
   close c_table;
end;
/
-
Bulk collect into the statement
Hi all
I'm trying to copy data over a database link.
I'm using Oracle 11g on Windows.
I checked the data in the source table and it exists. When I run the block below it completes successfully, but no data arrives in the target database.
SET SERVEROUTPUT ON

DECLARE
   TYPE t_bulk_collect_test_1 IS TABLE OF NT_PROP%ROWTYPE;

   l_tab1 t_bulk_collect_test_1;

   CURSOR c_data1 IS
      SELECT *
        FROM NT_PROP@dp_copy;
BEGIN
   OPEN c_data1;
   LOOP
      FETCH c_data1
         BULK COLLECT INTO l_tab1 LIMIT 10000;
      commit;
      EXIT WHEN l_tab1.count = 0;
   END LOOP;
   CLOSE c_data1;
END;
/
Could someone please let me know what the error in this code is?
Thanks in advance.
A bulk operation will not improve performance here. Your code is a good example of how to write a performance nightmare. I'm sorry to say this, but it's the truth. Note also that the block only fetches into the collection and never inserts anything, which is why no data appears in the target.
Static SQL is the fastest way to do this. Since you transfer the data via a DB link there will be some overhead, but the bulk block is not the solution for that. A simple INSERT INTO ... SELECT ... is the best way to do what you want.
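Concretely, since the block above declares its collection against a local NT_PROP table, the whole thing reduces to one set-based statement. A sketch (the APPEND hint is optional and assumes a direct-path load is acceptable):

```sql
-- Set-based copy over the DB link: no PL/SQL, no collections, no LIMIT loop.
INSERT /*+ APPEND */ INTO NT_PROP
   SELECT * FROM NT_PROP@dp_copy;

COMMIT;
```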
-
How to use the BULK COLLECT INTO clause
Hi all,
I need something like this: I want to change a query by passing table names to it, on Oracle Database 10g Release 2.
For example, first I pass the table name scott.emp and run: select * from scott.emp;
Then I want to pass the table name scott.dept and run: select * from scott.dept;
using * in the select list.
How can I run this? Please give me a solution.
Please answer...

EXECUTE IMMEDIATE is in fact an option, but you cannot use it here because you have a variable select list.
You can use DBMS_SQL or a REF CURSOR. A REF CURSOR example is given below:

var oput_cur refcursor
var tabname varchar2(30)

exec :tabname := 'dual';

begin
   open :oput_cur for 'select * from ' || :tabname;
end;
/

print oput_cur
-
Hello
Can you help me sort the nested table below (based on join date) using the TABLE function?

declare
   type emp_rec is record
      (emp_id char (20),
       join_d date);

   type emp_tbl_type is table of emp_rec;

   non_sorted emp_tbl_type;
   asc_sorted emp_tbl_type;
begin
   -- create sample data
   non_sorted (1).emp_id := '1';
   non_sorted (1).join_d := sysdate + 5;
   non_sorted (2).emp_id := '2';
   non_sorted (2).join_d := sysdate + 1;
   non_sorted (3).emp_id := '3';
   non_sorted (3).join_d := sysdate + 7;
   non_sorted (4).emp_id := '4';
   non_sorted (4).join_d := sysdate + 7;

   for i in non_sorted.first .. non_sorted.last loop
      dbms_output.put_line ('Emp ID: ' || non_sorted (i).emp_id || ' Date: ' || non_sorted (i).join_d);
   end loop;

   select emp_id, join_d
     bulk collect into asc_sorted
     from table (cast (non_sorted as emp_tbl_type))
    order by join_d;

   -- can you sort these data and move them into asc_sorted?
   for i in 1 .. asc_sorted.count loop
      dbms_output.put_line ('Emp ID: ' || asc_sorted (i).emp_id || ' Date: ' || asc_sorted (i).join_d);
   end loop;
end;

Thank you!
Thanks, I fixed the issue after reading the Oracle documentation:

declare
   non_sorted emp_tbl_type := emp_tbl_type();
   asc_sorted emp_tbl_type := emp_tbl_type();

   /* cursor asc_cur is
      select emp_id, join_d
        from table (cast (non_sorted as emp_tbl_type))
       order by join_d; */
begin
   -- create sample data
   non_sorted.extend;
   non_sorted (1) := emp_rec ('1', sysdate + 5);
   non_sorted.extend;
   non_sorted (2) := emp_rec ('2', sysdate + 1);
   non_sorted.extend;
   non_sorted (3) := emp_rec ('3', sysdate + 7);
   non_sorted.extend;
   non_sorted (4) := emp_rec ('4', sysdate + 3);

   dbms_output.put_line ('* UNSORTED data *');
   for i in non_sorted.first .. non_sorted.last loop
      dbms_output.put_line ('Emp ID: ' || non_sorted (i).emp_id || ' Date: ' || non_sorted (i).join_d);
   end loop;

   select emp_rec (emp_id, join_d)
     bulk collect into asc_sorted
     from table (cast (non_sorted as emp_tbl_type))
    order by join_d;

   dbms_output.put_line ('* SORTED data *');
   for i in 1 .. asc_sorted.count loop
      dbms_output.put_line ('Emp ID: ' || asc_sorted (i).emp_id || ' Date: ' || asc_sorted (i).join_d);
   end loop;
end;

THX
-
Problem with BULK COLLECT and a table-type variable
Hi all
I defined a record type, then an index-by table of that record type, and bulk collected the data as shown in the code below. All of this was done in an anonymous block.
Then, when I tried to define the record as an object type instead and repeat the above, I got the error below:

ORA-06550: line 34, column 6:
PL/SQL: ORA-00947: not enough values
ORA-06550: line 31, column 4:
PL/SQL: SQL statement ignored

Could you help me get the result of the first scenario with the record type defined as an object?
Here's the code:

/* Formatted on 2009/08/03 17:01 (Formatter Plus v4.8.8) */
DECLARE
   TYPE obj_attrib IS TABLE OF num_char_object_1
      INDEX BY PLS_INTEGER;

   obj_var obj_attrib;

   TYPE num_char_record IS RECORD (
      char_attrib VARCHAR2 (100),
      num_attrib  NUMBER
   );

   TYPE rec_attrib IS TABLE OF num_char_record
      INDEX BY PLS_INTEGER;

   rec_var rec_attrib;
BEGIN
   SELECT first_name, employee_id
     BULK COLLECT INTO rec_var
     FROM employees
    WHERE ROWNUM <= 10;

   FOR iloop IN rec_var.FIRST .. rec_var.LAST LOOP
      DBMS_OUTPUT.put_line (   'Loop.'
                            || iloop
                            || rec_var (iloop).char_attrib
                            || '###'
                            || rec_var (iloop).num_attrib
                           );
   END LOOP;

   SELECT first_name, employee_id
     BULK COLLECT INTO obj_var
     FROM employees
    WHERE ROWNUM <= 10;
END;

And here is the object type, num_char_object_1:

CREATE OR REPLACE TYPE num_char_object_1 IS OBJECT (
   char_attrib VARCHAR2 (100),
   num_attrib  NUMBER
);
Welcome to the forum!
You should be bulk collecting object instances, something like:

SELECT num_char_object_1 (first_name, employee_id)
  BULK COLLECT INTO obj_var
  FROM emp
 WHERE ROWNUM <= 10;
-
Hi.
declare
   cursor c_1 is
      select col1, col2, col3, col4 from table1;

   type t_type is table of c_1%rowtype index by binary_integer;

   v_data t_type;
BEGIN
   OPEN c_1;
   LOOP
      FETCH c_1 BULK COLLECT INTO v_data LIMIT 200;
      EXIT WHEN v_data.COUNT = 0;

      FORALL i IN v_data.FIRST .. v_data.LAST
         INSERT INTO xxc_table (col1, col3, col4)
            SELECT v_data (i).col1, v_data (i).col3, v_data (i).col4
              FROM DUAL
             WHERE NOT EXISTS (SELECT 1 FROM xxc_table a WHERE col1 = col1 ..... );
      --commit;

      INSERT INTO xxc_table1 (col1, col2, col3, col4)
         SELECT v_data (i).col1, v_data (i).col2, v_data (i).col3, 'Y'
           FROM DUAL
          WHERE NOT EXISTS (SELECT 1 FROM xxc_table1 a WHERE col1 = col1 ..... );
      --exit when c_1%notfound;
   END LOOP;
   CLOSE c_1;
   commit;
END;
I get 40/28 PLS-00201: identifier 'I' must be declared. What is the problem in the above code? Please help me; I have lakhs of rows of data.
Thank you
Post edited by: Rajesh123 - I changed I to IDX
Post edited by: Rajesh123 - changed t_type to c_1 in the FETCH
But using a multi-table INSERT (INSERT ALL) to insert into the two tables at once from the same query would do the job without any PL/SQL bulk collection, and would avoid querying the source twice too.
For example, as a single INSERT:
SQL> create table table1 as
  2  select 1 as col1, 1 as col2, 1 as col3, 1 as col4 from dual union all
  3  select 2,2,2,2 from dual union all
  4  select 3,3,3,3 from dual union all
  5  select 4,4,4,4 from dual union all
  6  select 5,5,5,5 from dual union all
  7  select 6,6,6,6 from dual union all
  8  select 7,7,7,7 from dual union all
  9  select 8,8,8,8 from dual union all
 10  select 9,9,9,9 from dual union all
 11  select 10,10,10,10 from dual
 12  /

Table created.

SQL> create table xxc_table as
  2  select 1 as col1, 2 as col3, 3 as col4 from dual union all
  3  select 3, 4, 5 from dual union all
  4  select 5, 6, 7 from dual
  5  /

Table created.

SQL> create table xxc_table1 as
  2  select 3 as col1, 4 as col2, 5 as col3, 'N' as col4 from dual union all
  3  select 6, 7, 8, 'N' from dual
  4  /

Table created.

SQL> insert all
  2    when xt_insert is null then
  3      into xxc_table (col1, col3, col4)
  4      values (col1, col3, col4)
  5    when xt1_insert is null then
  6      into xxc_table1 (col1, col2, col3, col4)
  7      values (col1, col2, col3, 'Y')
  8  select t1.col1, t1.col2, t1.col3, t1.col4
  9       , xt.col1 as xt_insert
 10       , xt1.col1 as xt1_insert
 11  from table1 t1
 12       left outer join xxc_table xt on (t1.col1 = xt.col1)
 13       left outer join xxc_table1 xt1 on (t1.col1 = xt1.col1)
 14  /

15 rows created.

SQL> select * from xxc_table order by 1;

      COL1       COL3       COL4
---------- ---------- ----------
         1          2          3
         2          2          2
         3          4          5
         4          4          4
         5          6          7
         6          6          6
         7          7          7
         8          8          8
         9          9          9
        10         10         10

10 rows selected.

SQL> select * from xxc_table1 order by 1;

      COL1       COL2       COL3 C
---------- ---------- ---------- -
         1          1          1 Y
         2          2          2 Y
         3          4          5 N
         4          4          4 Y
         5          5          5 Y
         6          7          8 N
         7          7          7 Y
         8          8          8 Y
         9          9          9 Y
        10         10         10 Y

10 rows selected.

SQL>