Pre-Insert trigger problem
Hi guys, I am facing a strange problem, or maybe I am doing something wrong somewhere, but I can't find it. I hope I'll get a solution to my problem here.
I am updating quantities in the table MN_ITEM_DETAILS.
SQL> DESC MN_ITEM_DETAILS
Name Null? Type
----------------------------------------- -------- ------------------
SI_SHORT_CODE NUMBER(10)
SI_CODE NUMBER(15)
ITEM_DESCP VARCHAR2(200)
ITEM_U_M VARCHAR2(6)
ITEM_QTY NUMBER(10)
ITEM_REMARKS VARCHAR2(100)
I have the master-detail tables below; their before-insert trigger block works very well.
SQL> DESC MN_MDV_MASTER
Name Null? Type
----------------------------------------- -------- ----------------------------
MDV_NO NOT NULL NUMBER(15)
MDV_DATE DATE
WHSE_LOC VARCHAR2(15)
PROJ_WHSE VARCHAR2(30)
ACTIVITY_LOC VARCHAR2(30)
MRF_NO VARCHAR2(30)
CLIENT VARCHAR2(30)
CLIENT_PO# VARCHAR2(15)
CLIENT_PO_DATE DATE
WHSE_INCHG VARCHAR2(30)
WHSE_DATE DATE
RECD_BY VARCHAR2(30)
INSPECTED_BY VARCHAR2(30)
DRIVER_NAME VARCHAR2(30)
REMARKS VARCHAR2(200)
RECD_BY_DATE DATE
INSPECTED_DATE DATE
DRIVER_NAME_DATE DATE
CST_CENTER VARCHAR2(15)
SQL> DESC MN_MDV_DETAILS
Name Null? Type
----------------------------------------- -------- ----------------------------
MDV_NO NUMBER(15)
ITEM_CODE NUMBER(15)
ITEM_DESCP VARCHAR2(150)
ITEM_U_M VARCHAR2(6)
ITEM_QTY NUMBER(6)
ITEM_BALANCE NUMBER(10)
PROJECT VARCHAR2(15)
ACTIVITY VARCHAR2(15)
LOCATION VARCHAR2(15)
These are the block-level triggers that fire before and after the INSERT:
PRE-INSERT -- on the details block
UPDATE MN_ITEM_DETAILS
SET ITEM_QTY = NVL(ITEM_QTY,0) - NVL(:MN_MDV_DETAILS.ITEM_QTY,0)
WHERE SI_CODE = :MN_MDV_DETAILS.ITEM_CODE;
POST-INSERT MASTER BLOCK LEVEL TRIGGER
INSERT INTO MN_MRBV_MASTER(
MDV# ,
MDV_DATE ,
WHSE_LOC ,
CST_CENTER )VALUES
(:MN_MDV_MASTER.MDV_NO ,
:MN_MDV_MASTER.MDV_DATE ,
:MN_MDV_MASTER.WHSE_LOC ,
:MN_MDV_MASTER.CST_CENTER);
POST-INSERT ON DETAILS BLOCK LEVEL
INSERT INTO MN_MRBV_DETAILS(
MDV# ,
ITEM_CODE ,
ITEM_DESCP ,
ITEM_U_M ,
QTY ,
ITEM_BALANCE ,
PROJECT ,
ACTIVITY ,
LOCATION )VALUES
(:MN_MDV_DETAILS.MDV_NO ,
:MN_MDV_DETAILS.ITEM_CODE ,
:MN_MDV_DETAILS.ITEM_DESCP ,
:MN_MDV_DETAILS.ITEM_U_M ,
:MN_MDV_DETAILS.ITEM_QTY ,
:MN_MDV_DETAILS.ITEM_BALANCE,
:MN_MDV_DETAILS.PROJECT ,
:MN_MDV_DETAILS.ACTIVITY ,
:MN_MDV_DETAILS.LOCATION );
All of the above works fine and updates MN_ITEM_DETAILS.ITEM_QTY correctly. But when I use the same approach on the master-detail tables below, it does not update ITEM_QTY in MN_ITEM_DETAILS.
SQL> DESC MN_MRBV_MASTER
Name Null? Type
----------------------------------------- -------- ----------------------------
MDV# NOT NULL NUMBER(15)
MDV_DATE DATE
WHSE_LOC VARCHAR2(15)
RET_FRM_PROJECT VARCHAR2(1)
RET_FRM_CLIENT VARCHAR2(1)
CST_CENTER VARCHAR2(15)
WHSE_INCHG VARCHAR2(30)
WHSE_DATE DATE
RETURN_BY VARCHAR2(30)
INSPECTED_BY VARCHAR2(30)
RETURN_BY_DATE DATE
INSPECTED_BY_DATE DATE
DRIVER_NAME VARCHAR2(30)
DRIVER_DATE DATE
REMARKS VARCHAR2(250)
SQL> DESC MN_MRBV_DETAILS
Name Null? Type
----------------------------------------- -------- ----------------------------
MDV# NUMBER(15)
ITEM_CODE NUMBER(15)
ITEM_DESCP VARCHAR2(150)
ITEM_U_M VARCHAR2(6)
QTY NUMBER(6)
ITEM_BALANCE NUMBER(10)
PROJECT VARCHAR2(15)
ACTIVITY VARCHAR2(15)
LOCATION VARCHAR2(15)
PRE-INSERT (on the MN_MRBV_DETAILS block) --> here it does not update MN_ITEM_DETAILS.ITEM_QTY. Any suggestion, please, as to why it is not updating?
UPDATE MN_ITEM_DETAILS
SET ITEM_QTY = NVL(ITEM_QTY,0) + NVL(:MN_MRBV_DETAILS.QTY,0)
WHERE SI_CODE = :MN_MRBV_DETAILS.ITEM_CODE;
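One common reason a PRE-INSERT like the one above appears to "do nothing" is that the UPDATE succeeds but matches zero rows (for example, if no MN_ITEM_DETAILS row has SI_CODE equal to the MRBV ITEM_CODE). A hedged diagnostic sketch, assuming the Oracle Forms trigger context of the thread:

```sql
-- Diagnostic sketch (not the poster's code): fail loudly if the UPDATE
-- matched no MN_ITEM_DETAILS row, instead of silently doing nothing.
BEGIN
  UPDATE MN_ITEM_DETAILS
     SET ITEM_QTY = NVL(ITEM_QTY, 0) + NVL(:MN_MRBV_DETAILS.QTY, 0)
   WHERE SI_CODE = :MN_MRBV_DETAILS.ITEM_CODE;

  IF SQL%ROWCOUNT = 0 THEN
    -- MESSAGE and FORM_TRIGGER_FAILURE are standard Forms built-ins
    MESSAGE('No MN_ITEM_DETAILS row for item code ' || :MN_MRBV_DETAILS.ITEM_CODE);
    RAISE FORM_TRIGGER_FAILURE;
  END IF;
END;
```

If the row count is zero, the problem is the join condition or the data, not the trigger timing.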
Regards,
Houda
Published by: houda Shareef on January 8, 2011 02:19
Try writing your code in a Pre-Update trigger.
Tags: Oracle Development
Similar Questions
-
Hello
I'm working on an Oracle Forms 10g form, and I get the following error when inserting a record: "FRM-40735: PRE-INSERT trigger raised unhandled exception ORA-06503".
When I remove the Pre-Insert trigger, the record saves and cancels successfully. Here's the code of the Pre-Insert trigger:
/* To insert the date and the user automatically */
DECLARE
vb_result BOOLEAN;
BEGIN
vb_result := pkg.check_unique('reference', 'ref_desc', :reference.ref_desc, 'ref_type', :reference.ref_type, null, null, 2);
IF (vb_result = TRUE)
THEN
msg_alert('Description already existing', 'E', TRUE);
END IF;
:reference.ref_cdate := SYSDATE;
:reference.ref_cuser := USER;
END;
can someone help me thanks
Published by: user11811876 on August 21, 2009 10:58
Hello,
ORA-06503 means a function finished without returning a value.
You may well have a RETURN in your function, but some path through it (especially an exception handler) does not reach one.
Just check your cil_pkg.check_unique function, or post its code here.
You missed a RETURN statement somewhere inside it.
Hope it helps. Mark the answer as helpful/correct if it does.
Carole
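To illustrate the point, here is a hypothetical sketch (check_unique_demo is not the poster's actual function) of the pattern ORA-06503 complains about: every path through a function, including the exception handler, must reach a RETURN:

```sql
-- Hypothetical example: without the RETURN in the exception handler,
-- an exception would end the function without a return value,
-- and the caller would get ORA-06503.
CREATE OR REPLACE FUNCTION check_unique_demo (p_desc VARCHAR2)
RETURN BOOLEAN
IS
  v_count NUMBER;
BEGIN
  SELECT COUNT(*) INTO v_count
    FROM reference
   WHERE ref_desc = p_desc;
  RETURN (v_count > 0);
EXCEPTION
  WHEN OTHERS THEN
    RETURN FALSE;  -- every path returns a value
END;
```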
-
I am trying to write a trigger that will update a date column after each insert.
CREATE OR REPLACE TRIGGER PS1.XX_CDATE
AFTER INSERT ON ps1.COMMNPLANBUDGET
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
DECLARE
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
UPDATE ps1.COMMNPLANBUDGET SET CREATION_DATE = sysdate WHERE PROJECTID = :new.PROJECTID;
COMMIT;
END XX_CDATE;
I wrote the code above for this; it compiled successfully, but when I insert a record the date is not updated. Please let me know where I went wrong. Thanks in advance.
All you need is this:
create or replace trigger ps1.xx_cdate
before insert on ps1.commnplanbudget
referencing old as old new as new
for each row
begin
  :new.creation_date := sysdate;
end xx_cdate;
Use a BEFORE INSERT trigger and just set the :NEW value.
-
Redirect data to another table using before insert trigger.
Dear all,
How can I redirect the data to another table in a before Insert trigger? My database is Oracle10g.
I have a table EMP (EMP_ID, LAST_NAME, SALARY).
I have another EMP_COPY table with the same structure. I also have a before Insert trigger on the EMP table.
Based on a condition that I have to redirect the data in table EMP_COPY. Let's say the condition is EMP_ID = 100.
I fire an insert on the EMP table, for example INSERT INTO EMP(EMP_ID,LAST_NAME,SALARY) VALUES(100,'Dev',500).
Inside the Before Insert trigger on the EMP table, I have this code:
IF :NEW.EMP_ID = 100 THEN
INSERT INTO EMP_COPY (EMP_ID, LAST_NAME, SALARY)
VALUES (:NEW.EMP_ID, :NEW.LAST_NAME, :NEW.SALARY);
COMMIT;
ELSE
NULL;
END IF;
But the problem is that the data also goes into the original EMP table, which I don't want. The insert should go into EMP only if EMP_ID != 100.
One way was to raise a user-defined exception inside the IF statement and not handle it, so that the original insert into EMP fails while the insert into EMP_COPY succeeds. But with that solution the unhandled exception propagates from the trigger to the calling environment, and I can't handle it outside the trigger because the calling environment is a standard Oracle Apps form that cannot be customized.
Any kind of help will be highly appreciated; I have been fighting this for more than two weeks.
Thanks in advance
Dev
Remove the autonomous transaction pragma... and then try again.
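A sketch of the suggested fix, with names taken from the thread: without the autonomous transaction pragma, the EMP_COPY insert joins the caller's transaction, so no COMMIT is needed (or allowed) inside the trigger.

```sql
-- Sketch, assuming the thread's EMP/EMP_COPY tables: redirect a copy
-- of the row in the same transaction, no pragma, no COMMIT.
CREATE OR REPLACE TRIGGER emp_redirect_bi
BEFORE INSERT ON emp
FOR EACH ROW
BEGIN
  IF :NEW.emp_id = 100 THEN
    INSERT INTO emp_copy (emp_id, last_name, salary)
    VALUES (:NEW.emp_id, :NEW.last_name, :NEW.salary);
  END IF;
END;
```

Note this still lets the row land in EMP as well; a trigger cannot silently suppress its own base-table insert, which is why true redirection is usually done with an INSTEAD OF trigger on a view.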
-
Parent - child Table Insert Trigger
Hello
I need assistance with an Insert trigger. I want to insert field data from "Table A" into "Table B" based on a PK and FK. Does anyone have code examples?
Table A
SEQ_NO - PK
FIELD A
FIELD B
FIELD C
Table B
FK_SEQ_NO
FIELD B
FIELD C
When data is saved in "Table A", I need to fill FK_SEQ_NO with the saved "Table A" SEQ_NO value, along with the FIELD B and FIELD C values recorded in Table B.
Any help is really appreciated, thank you.
Hi Charles,
It depends on exactly how you created your trigger.
In my trigger I fill WORK_PACKAGE_ID (the PK) in the parent table using rmdb_work_packages_seq.NEXTVAL. The insert statement then uses :NEW.WORK_PACKAGE_ID to fill the FK column in the child table.
You can also use ITEM_ID_SEQ.CURRVAL, which repeats the last number that was generated, and which I suppose you use to fill Table A.
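A minimal sketch of that pattern, with hypothetical names (table_a, table_b, seq_a); the thread itself only describes the approach:

```sql
-- Fill the parent PK from a sequence, then reuse :NEW.seq_no for the child FK.
CREATE OR REPLACE TRIGGER table_a_bi
BEFORE INSERT ON table_a
FOR EACH ROW
BEGIN
  IF :NEW.seq_no IS NULL THEN
    -- direct sequence assignment is 11g+; on older versions use
    -- SELECT seq_a.NEXTVAL INTO :NEW.seq_no FROM dual;
    :NEW.seq_no := seq_a.NEXTVAL;
  END IF;
  INSERT INTO table_b (fk_seq_no, field_b, field_c)
  VALUES (:NEW.seq_no, :NEW.field_b, :NEW.field_c);
END;
```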
Hope that helps,
Martin
-
Required-value error on a primary key generated in a database Before Insert trigger
Dear all,
I am a beginner in ADF and am using JDeveloper Studio Edition Version 12.2.1.0.0.
I'm trying to insert a record. I created a before-insert database trigger to get the primary key and set some other default values.
On the page I made the primary key column read-only and set Required to false.
When I try to save/commit (programmatically), I get "value required" errors. How can I stop these errors?
Secondly, I also tried setting the attribute to disabled on the View object, which raised the error below:
< oracle.dfw.impl.incident.DiagnosticsDataExtractorImpl > < DiagnosticsDataExtractorImpl > < createADRIncident > < incident created 148 to key problem "DFW-99998 [oracle.jbo.PersistenceException] [oracle.jbo.server.RowReference.verifyPrimaryKeys] [Proposals]" >
Hoping for help.
Thanks and greetings
Arif Khadas
If the primary key values come from a DB sequence, you can follow this approach:
Using a database sequence in ADF - Waslley Souza blog
Oracle Fusion Middleware Technologies: ADF 11g: generate the primary key from a sequence number
Otherwise, instead of a DB trigger, create a DB function that retrieves the PK value, and call that stored function in the overridden create() method of the entity:
-
Before Insert TRIGGER to create partitions problem
Hello
I'm having a problem with the following situation in Oracle 8i:
I have a table TEST_TABLE, which is partitioned by range on a DATE column. The idea is to have a partition for each month, so the HIGH_VALUE of each partition is always the first day of the following month.
I created a BEFORE INSERT trigger on TEST_TABLE which tests whether the partition for the month of the record being inserted exists and, in case it doesn't, calls an AUTONOMOUS_TRANSACTION procedure to create the partition.
Running the code below, you can see that even though partitions are created as expected, when you try to insert a record with a date beyond the last partition for the first time, this error is returned:
ORA-14400: inserted partition key is beyond highest legal partition key.
Note that if you run the same insert statement again, it is inserted correctly into the partition that was created on the first try.
I'll appreciate any help on this matter.
code
----------------
CREATE TABLE TEST_TABLE (
ID NUMBER,
DT DATE
)
TABLESPACE USERS
PARTITION BY RANGE (DT)
(
PARTITION PART_B42009 VALUES LESS THAN (TO_DATE('2009-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
TABLESPACE USERS
);
/
CREATE OR REPLACE PROCEDURE SP_ADD_PARTITION (P_DATE TEST_TABLE.DT%TYPE)
IS
PRAGMA AUTONOMOUS_TRANSACTION;
V_STR VARCHAR2(500);
BEGIN
V_STR := 'ALTER TABLE TEST_TABLE ADD '
      || 'PARTITION BIRD' || TO_CHAR(P_DATE, 'YYYYMM')
      || ' VALUES LESS THAN (TO_DATE('''
      || TO_CHAR(ADD_MONTHS(P_DATE, 1), 'YYYY-MM') || '-01 00:00:00'', '
      || '''YYYY-MM-DD HH24:MI:SS'', ''NLS_CALENDAR=GREGORIAN''))';
EXECUTE IMMEDIATE (V_STR);
END SP_ADD_PARTITION;
/
CREATE OR REPLACE TRIGGER TR_B_I_R_TEST_TABLE
BEFORE INSERT
ON TEST_TABLE FOR EACH ROW
DECLARE
V_PARTITION_EXISTS NUMBER;
BEGIN
IF :NEW.DT >= TO_DATE('2009-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') THEN
EXECUTE IMMEDIATE 'SELECT COUNT(1) '
               || 'FROM all_tab_partitions atp '
               || 'WHERE atp.table_name = ''TEST_TABLE'' '
               || 'AND atp.partition_name = :v1'
INTO V_PARTITION_EXISTS
USING 'BIRD' || TO_CHAR(:NEW.DT, 'YYYYMM');
IF V_PARTITION_EXISTS = 0 THEN
DBMS_OUTPUT.put_line('Partition [' || 'BIRD' || TO_CHAR(:NEW.DT, 'YYYYMM') || '] does not exist!');
DBMS_OUTPUT.put_line('creating...');
SP_ADD_PARTITION(:NEW.DT);
DBMS_OUTPUT.put_line('success.');
EXECUTE IMMEDIATE 'SELECT COUNT(1) '
               || 'FROM all_tab_partitions atp '
               || 'WHERE atp.table_name = ''TEST_TABLE'' '
               || 'AND atp.partition_name = :v1'
INTO V_PARTITION_EXISTS
USING 'BIRD' || TO_CHAR(:NEW.DT, 'YYYYMM');
IF V_PARTITION_EXISTS = 1 THEN
DBMS_OUTPUT.put_line('it''s visible at this point...');
ELSE
DBMS_OUTPUT.put_line('it''s not visible at this point...');
END IF;
ELSE
DBMS_OUTPUT.put_line('Partition [' || 'BIRD' || TO_CHAR(:NEW.DT, 'YYYYMM') || '] already exists!');
END IF;
END IF;
DBMS_OUTPUT.put_line('continuing with insert...');
END TR_B_I_R_TEST_TABLE;
-- goes to the lowest partition
INSERT INTO TEST_TABLE VALUES (1, TO_DATE('2008-12-31 23:59:59', 'YYYY-MM-DD HH24:MI:SS'));
-- returns the error on the first try
INSERT INTO TEST_TABLE VALUES (2, TO_DATE('2009-01-01 00:00:01', 'YYYY-MM-DD HH24:MI:SS'));
----------------
It is the use of the pragma AUTONOMOUS_TRANSACTION. Your current transaction cannot see the result of that DDL, since it occurs outside of the current transaction. The clue is in the name.
Of course, you cannot run DDL in a trigger without using this pragma, so you're pretty much stuck. There is a solution in 11g (interval partitioning), but that will not help you here. Unfortunately, your only option is to pre-create the required partitions ahead of need. For example, you might have a DBMS_JOB that creates a partition for the next month, running on the last day of each month (or the company's logical date).
Cheers, APC
blog: http://radiofreetooting.blogspot.com
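A sketch of that pre-creation job, reusing the thread's SP_ADD_PARTITION. DBMS_SCHEDULER is shown for illustration (on 8i itself you would use DBMS_JOB), and the job name and calendar expression are assumptions:

```sql
-- Monthly job: on the last day of each month, create next month's partition
-- ahead of need, so the trigger-based DDL is never required.
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'ADD_NEXT_MONTH_PARTITION',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN SP_ADD_PARTITION(ADD_MONTHS(TRUNC(SYSDATE, ''MM''), 1)); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MONTHLY; BYMONTHDAY=-1',  -- last day of each month
    enabled         => TRUE);
END;
```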
-
After insert trigger mutating-table error: what is the way to overcome it?
I have two tables, profiles_answers and user_profileanswers. The requirement: when a user inserts an answer into user_profileanswers, I need to calculate the weightage of that question from the options available in the profiles_answers table and update user_profileanswers with that weightage. For that I wrote the following after-insert trigger, but when I try to insert, it throws a mutating table error, since I update the same table the insert fired on. Please let me know how I can solve this problem.
create or replace
TRIGGER AI_weightageCaluculation
AFTER INSERT ON user_profileanswers
FOR EACH ROW
DECLARE
v_a VARCHAR2(50);
v_b VARCHAR2(50);
v_c VARCHAR2(50);
v_d VARCHAR2(50);
v_e VARCHAR2(50);
a_weightage NUMBER;
b_weightage NUMBER;
c_weightage NUMBER;
d_weightage NUMBER;
e_weightage NUMBER;
BEGIN
SELECT option_a, option_b, option_c, option_d, option_e
INTO v_a, v_b, v_c, v_d, v_e
FROM profiles_answers
WHERE profile_questions_id = :new.profilequestion_id;
IF (v_a IS NOT NULL AND v_b IS NOT NULL AND v_c IS NOT NULL AND v_d IS NOT NULL AND v_e IS NOT NULL) THEN
a_weightage := 85;
b_weightage := 60;
c_weightage := 45;
d_weightage := 30;
e_weightage := 15;
ELSIF (v_a IS NOT NULL AND v_b IS NOT NULL AND v_c IS NOT NULL AND v_d IS NOT NULL AND v_e IS NULL) THEN
a_weightage := 85;
b_weightage := 60;
c_weightage := 30;
d_weightage := 15;
ELSIF (v_a IS NOT NULL AND v_b IS NOT NULL AND v_c IS NOT NULL AND v_d IS NULL AND v_e IS NULL) THEN
a_weightage := 85;
b_weightage := 45;
c_weightage := 15;
ELSE
a_weightage := 85;
b_weightage := 15;
END IF;
IF :new.answer = 'A' THEN
UPDATE user_profileanswers
SET weightage = a_weightage
WHERE user_id = :new.user_id
AND profileanswer_id = :new.profileanswer_id
AND profilequestion_id = :new.profilequestion_id;
ELSIF :new.answer = 'B' THEN
UPDATE user_profileanswers
SET weightage = b_weightage
WHERE user_id = :new.user_id
AND profileanswer_id = :new.profileanswer_id
AND profilequestion_id = :new.profilequestion_id;
ELSIF :new.answer = 'C' THEN
UPDATE user_profileanswers
SET weightage = c_weightage
WHERE user_id = :new.user_id
AND profileanswer_id = :new.profileanswer_id
AND profilequestion_id = :new.profilequestion_id;
ELSIF :new.answer = 'D' THEN
UPDATE user_profileanswers
SET weightage = d_weightage
WHERE user_id = :new.user_id
AND profileanswer_id = :new.profileanswer_id
AND profilequestion_id = :new.profilequestion_id;
ELSE
UPDATE user_profileanswers
SET weightage = e_weightage
WHERE user_id = :new.user_id
AND profileanswer_id = :new.profileanswer_id
AND profilequestion_id = :new.profilequestion_id;
END IF;
END;
Thanks in advance.
Hmm... Why do it after insert?
CREATE OR REPLACE TRIGGER BI_weightageCaluculation
BEFORE INSERT ON user_profileanswers FOR EACH ROW
BEGIN
SELECT CASE :NEW.answer
       WHEN 'A' THEN a_weight
       WHEN 'B' THEN b_weight
       WHEN 'C' THEN c_weight
       WHEN 'D' THEN d_weight
       ELSE e_weight
       END
INTO :new.weightage
FROM (SELECT 85 a_weight,
             CASE option_result
               WHEN 31 THEN 60
               WHEN 15 THEN 60
               WHEN 7 THEN 45
               ELSE 15
             END b_weight,
             CASE option_result
               WHEN 31 THEN 45
               WHEN 15 THEN 30
               WHEN 7 THEN 15
             END c_weight,
             CASE option_result
               WHEN 31 THEN 30
               WHEN 15 THEN 15
             END d_weight,
             CASE option_result
               WHEN 31 THEN 15
             END e_weight
      FROM (SELECT DECODE(option_a, NULL, 0, 1) +
                   DECODE(option_b, NULL, 0, 2) +
                   DECODE(option_c, NULL, 0, 4) +
                   DECODE(option_d, NULL, 0, 8) +
                   DECODE(option_e, NULL, 0, 16) option_result
            FROM profiles_answers
            WHERE profile_questions_id = :new.profilequestion_id));
END;
/
HTH
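As a side note for readers on 11g or later: if the recalculation really must happen after the insert, a compound trigger avoids ORA-04091 by deferring the UPDATE to the after-statement section. This is an illustrative sketch on the thread's table, with placeholder weight logic:

```sql
-- Hedged 11g+ alternative: collect keys at row level, update once after
-- the statement, when the table is no longer mutating.
CREATE OR REPLACE TRIGGER user_profileanswers_ct
FOR INSERT ON user_profileanswers
COMPOUND TRIGGER
  TYPE t_ids IS TABLE OF user_profileanswers.profileanswer_id%TYPE;
  g_ids t_ids := t_ids();

  AFTER EACH ROW IS
  BEGIN
    g_ids.EXTEND;                        -- remember each inserted row
    g_ids(g_ids.LAST) := :NEW.profileanswer_id;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    FORALL i IN 1 .. g_ids.COUNT         -- safe: statement has completed
      UPDATE user_profileanswers
         SET weightage = 85              -- placeholder; real logic as in the thread
       WHERE profileanswer_id = g_ids(i);
  END AFTER STATEMENT;
END;
```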
-
Hello Experts...
I'm trying to insert a value from one table into another table using an after-insert trigger.
I have two tables:
RATE_TABLE: (rate_id, rate_value, user_id)
rate_avg_table: (user_id, avg_rate)
The trigger should insert the average rate value for each user id.
The point is, I want the trigger to clear all data from rate_avg_table every time, before inserting the new values.
So I created this trigger, and it compiled successfully :)
create or replace trigger "RATING_AI"
after insert or update or delete on "RATING_TABLE"
for each row
begin
delete from rate_avg where 1 = 1;
insert into rate_avg (user_id, rate_avg)
select user_id, AVG(rate_value)
from rating
group by user_id;
end;
The problem is that when I'm trying to insert a new value into the rate table,
it gives me an error message that the table is mutating:
ORA-04091: table RATING is mutating, trigger/function may not see it ORA-06512: at "RATING_AI", line 5 ORA-04088: error during execution of trigger 'RATING_AI'
any ideas?
Edited by: 933151 10 may 2012 16:01
If you really want to do it this way, you don't want or need a row-level trigger; you want a statement-level trigger. This fires once per statement, not once for each row changed by it.
create or replace trigger "RATING_AI"
after insert or update or delete on "RATING_TABLE"
begin
delete from rate_avg;
insert into rate_avg (user_id, rate_avg)
select user_id, AVG(rate_value)
from rating_table
group by user_id;
end;
That being said, it seems quite inefficient to recalculate the data every time, and storing this kind of computed value also violates normalization. If there are few enough rows in RATING_TABLE that recalculating the average for every USER_ID whenever a row changes is not a performance problem, there is probably no need to store the average at all: you could just calculate it at runtime (possibly through a view). If you really do need to store the calculated value because it is too expensive to compute at runtime, you certainly wouldn't want to recalculate it whenever any data in the underlying table changes; instead you would recalculate only for the USER_ID values in RATING_TABLE that actually changed. For this sort of thing you probably want a materialized view that refreshes on commit, so that you don't have to write code to keep the summary data in sync with the detail data.
Justin
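A sketch of the materialized-view route Justin describes, with assumed names; a fast-refresh-on-commit aggregate MV needs a materialized view log plus COUNT/SUM columns alongside the AVG:

```sql
-- Hedged sketch: keep the per-user average in sync automatically.
CREATE MATERIALIZED VIEW LOG ON rating_table
  WITH ROWID (user_id, rate_value) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW rate_avg_mv
  REFRESH FAST ON COMMIT
AS
SELECT user_id,
       COUNT(*)          cnt,       -- required for fast refresh
       COUNT(rate_value) cnt_rate,  -- required to support AVG
       SUM(rate_value)   sum_rate,
       AVG(rate_value)   avg_rate
FROM   rating_table
GROUP  BY user_id;
```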
-
Hello
I have a data block in the form that is based on a DB table. There are a few searchable fields in the block, including start and end dates. If I don't use a Pre-Query trigger, records are queried just fine. But I want to do a range selection between the start and end date fields. So I created a Pre-Query trigger, built a new where-clause from all my query fields, and set it as the DEFAULT_WHERE of my DB block.
SET_BLOCK_PROPERTY (block_id, default_where, where_clause);
But when I look at the session's query in Toad, the query looks like
WHERE (FIELD_1 = :1) and FIELD_1 = :1
This means it is running the block's default where clause and the new where clause at the same time.
I checked the form, and I did not find any extra calls to EXECUTE_QUERY.
What can be the problem?
Two possibilities: either concatenate the values into the WHERE clause, like
SET_BLOCK_PROPERTY('BLOCK', ONETIME_WHERE, 'FIELD LIKE '''||:BLOCK.ITEM||'%''');
or set the query values into items in a control block, like:
:CONTROLBLOCK.ITEM := :BLOCK.ITEM;
SET_BLOCK_PROPERTY('BLOCK', ONETIME_WHERE, 'FIELD LIKE :CONTROLBLOCK.ITEM');
-
After insert trigger: how do you assign a value to a variable?
Was wondering if I could get a little help please... I have no experience in PL/SQL... The trigger below fires when inserting into one table, and it adds rows to another table... When I try to assign "bdate" a value from another table, I can't compile the trigger; I get ORA-04084 when trying to compile... Can someone tell me what the problem is in the code, please?
Thank you
Curtis
CREATE OR REPLACE TRIGGER "HKSM"."CHILLPK_IAR" AFTER
INSERT ON "CHILLPACK" FOR EACH ROW
DECLARE
bdate SAPUPLOAD.mfgdate%TYPE;
BEGIN
IF :New.sku NOT IN ('REPRISE', 'EMT') THEN
IF TO_CHAR(:New.adddate, 'HH24:MI:SS') < '04:00:00' THEN
-- adddate = production, last updated date
:New.adddate := :New.adddate - 1;
END IF;
-- bdate = mfgdate (slaughter date from the inventory)
SELECT mfgdate INTO bdate FROM INVENTORY
WHERE RTRIM(containerkey) = RTRIM(:New.containerkey)
AND RTRIM(sku) = RTRIM(:New.sku);
INSERT INTO SAPUPLOAD (
sku,
quantity,
wgt,
mfgdate,
pflag,
rtype,
containerid,
batchdate)
VALUES (
:New.sku,
:New.traysproduced,
:New.weightproduced,
:New.adddate,
NULL,
'CREATE',
:New.containerkey,
bdate);
END IF;
END;
user3954362 wrote:
Thanks for your reply... So why does this version work?
CREATE OR REPLACE TRIGGER "HKSM"."CHILLPK_IAR" AFTER
INSERT ON "CHILLPACK" FOR EACH ROW
DECLARE
bdate sapupload.mfgdate%TYPE;
BEGIN
IF :New.sku NOT IN ('REPRISE', 'EMT') THEN
IF TO_CHAR(:New.adddate, 'HH24:MI:SS') < '05:30:00' THEN
bdate := :New.adddate - 1;
ELSE
bdate := :New.adddate;
END IF;
INSERT INTO SAPUPLOAD
(sku, qty, wgt, mfgdate, pflag, rtype, containerid, batchdate)
VALUES
(:New.sku, :New.traysproduced, :New.weightproduced, :New.adddate, null, 'CREATE', :New.containerkey, bdate);
END IF;
END;
Because you are no longer trying to change a :NEW value, as you were with this line in your original post:
:New.adddate := :New.adddate - 1;
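For completeness: if the adddate adjustment itself were still needed, modifying :NEW is only legal in a BEFORE row trigger. A hedged sketch (trigger name hypothetical):

```sql
-- Assigning to :NEW is allowed here; the same assignment in an
-- AFTER row trigger raises ORA-04084.
CREATE OR REPLACE TRIGGER chillpack_bir
BEFORE INSERT ON CHILLPACK FOR EACH ROW
BEGIN
  IF TO_CHAR(:NEW.adddate, 'HH24:MI:SS') < '04:00:00' THEN
    :NEW.adddate := :NEW.adddate - 1;
  END IF;
END;
```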
-
Hi all
I thought I would post this here because it might be a more appropriate forum. I'm under a bit of time pressure, so I hope someone can help me. I'm having a problem with a program I wrote to acquire and store images from a line-scan camera.
My camera (SUI Goodrich, 1024 pixels) is connected to a frame-grabber card (PCIe-1427), which is connected via an RTSI cable to a PCI-6731 card attached to an SCC-68 (M series) connector block. My goal is to drive a scanning galvanometer mirror and acquire images from the camera continuously on each scan. One galvo scan corresponds to one image frame (1024 pixels x 1024 lines).
I managed to do it (I think), as my live-acquisition program works exactly as I expected. However, when I try to save the images, I noticed that for a larger number of images, the starting point of each image is shifted by the same amount per image. I'm not really sure what's going on; it seems to me that there is a delay in the "IMAQ Configure Buffer" VI when a larger number of buffers is used (even 100). Is it possible that my hardware trigger does not wait for all buffers to be configured before starting to run?
I would really appreciate any idea or ideas that anyone could have on this issue.
Sincere greetings,
Gill
I fixed the problem by not actually using 1000 buffers to save 1000 frames. The program works if the number of buffers is lowered.
-
I need to trigger on PFI0 in LabVIEW at a specific time (every hour). The signal is available for collection from hh:55:01 to hh:59:59. I have made several attempts but failed each time. Please see the description written in the VI; maybe you can help me.
I hope I understand your problem. You can try something like the attached. Drop "Untitled 2" into the attached schematic. The second trigger VI has the time control included (without the subVI).
-
Hello
I'm having some trouble with my project. I can't figure out what is wrong. The project involves building an FPGA target that uses a trigger to start sampling data, and it must also have pre/post trigger functions, but it does not fire when it is supposed to.
I attach my program.
Kind regards
Tim Jansson
(It seems that I managed to double-post this thread again. I apologize, and I hope it will not cause too many problems.)
Never mind! I solved it!
-
Hi Experts,
I am new to Oracle. I'd like your help in fixing the performance of an insert query.
I have an insert query that fetches records from a partitioned table.
Background: the user indicates that the query was running in 30 minutes on 10g. The database was upgraded to 12c by one of my colleagues. Now the query runs continuously for hours with no result. I checked the settings: SGA is 9 GB, Windows - 4 GB. DB block size is 8192, db_file_multiblock_read_count is 128. The PGA aggregate target is 2457M.
The parameters are given below:
NAME TYPE VALUE
------------------------------------ ----------- ----------
DBFIPS_140 boolean FALSE
O7_DICTIONARY_ACCESSIBILITY boolean FALSE
active_instance_count integer
aq_tm_processes integer 1
archive_lag_target integer 0
asm_diskgroups string
asm_diskstring string
asm_power_limit integer 1
asm_preferred_read_failure_groups string
audit_file_dest string C:\APP\ADM
audit_sys_operations boolean TRUE
audit_trail string DB
awr_snapshot_time_offset integer 0
background_core_dump string partial
background_dump_dest string C:\APP\PRO
\RDBMS\TRA
backup_tape_io_slaves boolean FALSE
bitmap_merge_area_size integer 1048576
blank_trimming boolean FALSE
buffer_pool_keep string
buffer_pool_recycle string
cell_offload_compaction string ADAPTIVE
cell_offload_decryption boolean TRUE
cell_offload_parameters string
cell_offload_plan_display string AUTO
cell_offload_processing boolean TRUE
cell_offloadgroup_name string
circuits integer
client_result_cache_lag big integer 3000
client_result_cache_size big integer 0
clonedb boolean FALSE
cluster_database boolean FALSE
cluster_database_instances integer 1
cluster_interconnects string
commit_logging string
commit_point_strength integer 1
commit_wait string
commit_write string
common_user_prefix string C##
compatible string 12.1.0.2.0
connection_brokers string ((TYPE=DED
((TYPE=EM
control_file_record_keep_time integer 7
control_files string G:\ORACLE\TROL01.CTL
FAST_RECOV
NTROL02.CT
control_management_pack_access string diagnostic
core_dump_dest string C:\app\dia
bal12\cdum
cpu_count integer 4
create_bitmap_area_size integer 8388608
create_stored_outlines string
cursor_bind_capture_destination string memory+stat
cursor_sharing string EXACT
cursor_space_for_time boolean FALSE
db_16k_cache_size big integer 0
db_2k_cache_size big integer 0
db_32k_cache_size big integer 0
db_4k_cache_size big integer 0
db_8k_cache_size big integer 0
db_big_table_cache_percent_target string 0
db_block_buffers integer 0
db_block_checking string FALSE
db_block_checksum string TYPICAL
db_block_size integer 8192
db_cache_advice string ON
db_cache_size big integer 0
db_create_file_dest string
db_create_online_log_dest_1 string
db_create_online_log_dest_2 string
db_create_online_log_dest_3 string
db_create_online_log_dest_4 string
db_create_online_log_dest_5 string
db_domain string
db_file_multiblock_read_count integer 128
db_file_name_convert string
db_files integer 200
db_flash_cache_file string
db_flash_cache_size big integer 0
db_flashback_retention_target integer 1440
db_index_compression_inheritance string NONE
db_keep_cache_size big integer 0
db_lost_write_protect string NONE
db_name string ORCL
db_performance_profile string
db_recovery_file_dest string G:\Oracle\
y_Area
db_recovery_file_dest_size big integer 12840M
db_recycle_cache_size big integer 0
db_securefile string PREFERRED
db_ultra_safe string
db_unique_name string ORCL
db_unrecoverable_scn_tracking boolean TRUE
db_writer_processes integer 1
dbwr_io_slaves integer 0
ddl_lock_timeout integer 0
deferred_segment_creation boolean TRUE
dg_broker_config_file1 string C:\APP\PRO
\DATABASE\
dg_broker_config_file2 string C:\APP\PRO
\DATABASE\
dg_broker_start boolean FALSE
diagnostic_dest string
disk_asynch_io boolean TRUE
dispatchers string (PROTOCOL=
12XDB)
distributed_lock_timeout integer 60
dml_locks integer 2076
dnfs_batch_size integer 4096
dst_upgrade_insert_conv boolean TRUE
enable_ddl_logging boolean FALSE
enable_goldengate_replication boolean FALSE
enable_pluggable_database boolean FALSE
event string
exclude_seed_cdb_view boolean TRUE
fal_client string
fal_server string
fast_start_io_target integer 0
fast_start_mttr_target integer 0
fast_start_parallel_rollback string LOW
file_mapping boolean FALSE
fileio_network_adapters string
filesystemio_options string
fixed_date string
gcs_server_processes integer 0
global_context_pool_size string
global_names boolean FALSE
global_txn_processes integer 1
hash_area_size integer 131072
heat_map string
hi_shared_memory_address integer 0
hs_autoregister boolean TRUE
ifile file
inmemory_clause_default string
inmemory_force string DEFAULT
inmemory_max_populate_servers integer 0
inmemory_query string ENABLE
inmemory_size big integer 0
inmemory_trickle_repopulate_servers_ integer 1
percent
instance_groups string
instance_name string ORCL
instance_number integer 0
instance_type string RDBMS
instant_restore boolean FALSE
java_jit_enabled boolean TRUE
java_max_sessionspace_size integer 0
java_pool_size big integer 0
java_restrict string none
java_soft_sessionspace_limit integer 0
job_queue_processes integer 1000
large_pool_size big integer 0
ldap_directory_access string NONE
ldap_directory_sysauth string no
license_max_sessions integer 0
license_max_users integer 0
license_sessions_warning integer 0
listener_networks string
local_listener string (ADDRESS =
= i184borac
(NET) (PORT =
lock_name_space string
lock_sga boolean FALSE
log_archive_config string
log_archive_dest string
log_archive_dest_1 string
log_archive_dest_10 string
log_archive_dest_11 string
log_archive_dest_12 string
log_archive_dest_13 string
log_archive_dest_14 string
log_archive_dest_15 string
log_archive_dest_16 string
log_archive_dest_17 string
log_archive_dest_18 string
log_archive_dest_19 string
log_archive_dest_2 string
log_archive_dest_20 string
log_archive_dest_21 string
log_archive_dest_22 string
log_archive_dest_23 string
log_archive_dest_24 string
log_archive_dest_25 string
log_archive_dest_26 string
log_archive_dest_27 string
log_archive_dest_28 string
log_archive_dest_29 string
log_archive_dest_3 string
log_archive_dest_30 string
log_archive_dest_31 string
log_archive_dest_4 string
log_archive_dest_5 string
log_archive_dest_6 string
log_archive_dest_7 string
log_archive_dest_8 string
log_archive_dest_9 string
log_archive_dest_state_1 string enable
log_archive_dest_state_10 string enable
log_archive_dest_state_11 string enable
log_archive_dest_state_12 string enable
log_archive_dest_state_13 string enable
log_archive_dest_state_14 string enable
log_archive_dest_state_15 string enable
log_archive_dest_state_16 string enable
log_archive_dest_state_17 string enable
log_archive_dest_state_18 string enable
log_archive_dest_state_19 string enable
log_archive_dest_state_2 string enable
log_archive_dest_state_20 string enable
log_archive_dest_state_21 string enable
log_archive_dest_state_22 string enable
log_archive_dest_state_23 string enable
log_archive_dest_state_24 string enable
log_archive_dest_state_25 string enable
log_archive_dest_state_26 string enable
log_archive_dest_state_27 string enable
log_archive_dest_state_28 string enable
log_archive_dest_state_29 string enable
log_archive_dest_state_3 string enable
log_archive_dest_state_30 string enable
log_archive_dest_state_31 string enable
log_archive_dest_state_4 string enable
log_archive_dest_state_5 string enable
log_archive_dest_state_6 string enable
log_archive_dest_state_7 string enable
log_archive_dest_state_8 string enable
log_archive_dest_state_9 string enable
log_archive_duplex_dest string
log_archive_format string ARC%S_%R.%
log_archive_max_processes integer 4
log_archive_min_succeed_dest integer 1
log_archive_start boolean TRUE
log_archive_trace integer 0
log_buffer big integer 28784K
log_checkpoint_interval integer 0
log_checkpoint_timeout integer 1800
log_checkpoints_to_alert boolean FALSE
log_file_name_convert string
max_dispatchers integer
max_dump_file_size string unlimited
max_enabled_roles integer 150
max_shared_servers integer
max_string_size string STANDARD
memory_max_target big integer 0
memory_target big integer 0
nls_calendar string GREGORIAN
nls_comp string BINARY
nls_currency string u
nls_date_format string DD-MON-RR
nls_date_language string ENGLISH
nls_dual_currency string C
nls_iso_currency string UNITED KIN
nls_language string ENGLISH
nls_length_semantics string BYTE
nls_nchar_conv_excp string FALSE
nls_numeric_characters string .,
nls_sort string BINARY
nls_territory string UNITED KIN
nls_time_format string HH24.MI.SS
nls_time_tz_format string HH24.MI.SS
nls_timestamp_format string DD-MON-RR
nls_timestamp_tz_format string DD-MON-RR
noncdb_compatible boolean FALSE
object_cache_max_size_percent integer 10
object_cache_optimal_size integer 102400
olap_page_pool_size big integer 0
open_cursors integer 300
open_links integer 4
open_links_per_instance integer 4
optimizer_adaptive_features boolean TRUE
optimizer_adaptive_reporting_only boolean FALSE
OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 12.1.0.2optimizer_index_caching integer 0
OPTIMIZER_INDEX_COST_ADJ integer 100
optimizer_inmemory_aware Boolean TRUE
the string ALL_ROWS optimizer_mode
optimizer_secure_view_merging Boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines Boolean TRUE
OPS os_authent_prefix string $
OS_ROLES boolean FALSE
parallel_adaptive_multi_user Boolean TRUE
parallel_automatic_tuning boolean FALSE
parallel_degree_level integer 100
parallel_degree_limit string CPU
parallel_degree_policy chain MANUAL
parallel_execution_message_size integer 16384
parallel_force_local boolean FALSE
parallel_instance_group string
parallel_io_cap_enabled boolean FALSE
PARALLEL_MAX_SERVERS integer 160
parallel_min_percent integer 0
parallel_min_servers integer 16parallel_min_time_threshold string AUTO
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_servers_target integer 64
parallel_threads_per_cpu integer 2
pdb_file_name_convert string
pdb_lockdown string
pdb_os_credential string
permit_92_wrap_format Boolean TRUE
pga_aggregate_limit great whole 4914M
whole large pga_aggregate_target 2457M-
Plscope_settings string IDENTIFIER
plsql_ccflags string
plsql_code_type chain INTERPRETER
plsql_debug boolean FALSE
plsql_optimize_level integer 2
plsql_v2_compatibility boolean FALSE
plsql_warnings DISABLE channel: AL
PRE_PAGE_SGA Boolean TRUE
whole process 300
processor_group_name string
query_rewrite_enabled string TRUE
applied query_rewrite_integrity chain
rdbms_server_dn chain
read_only_open_delayed boolean FALSE
recovery_parallelism integer 0
Recyclebin string on
redo_transport_user string
remote_dependencies_mode string TIMESTAMP
remote_listener chain
Remote_login_passwordfile string EXCLUSIVE
REMOTE_OS_AUTHENT boolean FALSE
remote_os_roles boolean FALSEreplication_dependency_tracking Boolean TRUE
resource_limit Boolean TRUE
resource_manager_cpu_allocation integer 4
resource_manager_plan chain
result_cache_max_result integer 5
whole big result_cache_max_size K 46208
result_cache_mode chain MANUAL
result_cache_remote_expiration integer 0
resumable_timeout integer 0
rollback_segments chain
SEC_CASE_SENSITIVE_LOGON Boolean TRUEsec_max_failed_login_attempts integer 3
string sec_protocol_error_further_action (DROP, 3)
sec_protocol_error_trace_action string PATH
sec_return_server_release_banner boolean FALSE
disable the serial_reuse chain
service name string ORCL
session_cached_cursors integer 50
session_max_open_files integer 10
entire sessions 472
Whole large SGA_MAX_SIZE M 9024
Whole large SGA_TARGET M 9024
shadow_core_dump string no
shared_memory_address integer 0
SHARED_POOL_RESERVED_SIZE large integer 70464307
shared_pool_size large integer 0
whole shared_server_sessions
SHARED_SERVERS integer 1
skip_unusable_indexes Boolean TRUE
smtp_out_server chain
sort_area_retained_size integer 0
sort_area_size integer 65536
spatial_vector_acceleration boolean FALSE
SPFile string C:\APP\PRO
\DATABASE\
sql92_security boolean FALSE
SQL_Trace boolean FALSE
sqltune_category string by DEFAULT
standby_archive_dest channel % ORACLE_HO
standby_file_management string MANUAL
star_transformation_enabled string TRUE
statistics_level string TYPICAL
STREAMS_POOL_SIZE big integer 0
tape_asynch_io Boolean TRUEtemp_undo_enabled boolean FALSE
entire thread 0
threaded_execution boolean FALSE
timed_os_statistics integer 0
TIMED_STATISTICS Boolean TRUE
trace_enabled Boolean TRUE
tracefile_identifier chain
whole of transactions 519
transactions_per_rollback_segment integer 5
UNDO_MANAGEMENT string AUTO
UNDO_RETENTION integer 900undo_tablespace string UNDOTBS1
unified_audit_sga_queue_size integer 1048576
use_dedicated_broker boolean FALSE
use_indirect_data_buffers boolean FALSE
use_large_pages string TRUE
user_dump_dest string C:\APP\PRO
\RDBMS\TRA
UTL_FILE_DIR chain
workarea_size_policy string AUTO
xml_db_events string enableThanks in advance
Firstly, thank you for posting the 10g execution plan, which was one of the key things we were missing.
Second, notice that you have two completely different execution plans, so you can expect different behaviour from each system.
Your 10g plan has a total cost of 23,959, while your 12c plan has a cost of 95,373, which is almost four times higher. All other things being equal, cost is supposed to correlate directly with elapsed time, so I would expect the 12c plan to take much longer to run.
From what I can see, the 10g plan begins with a full table scan on DEALERS, then a full scan on SCARF_VEHICLE_EXCLUSIONS, then a full scan on CBX_tlemsani_2000tje, and then a full scan on CLAIM_FACTS. The first three of these table scans have a very low cost (2 each), while the last has a huge cost of 172K. Likewise, the first three scans produce very few rows in 10g, fewer than 1,000 rows each, while the last table scan produces 454K rows.
It also looks as though something has gone wrong in the 10g optimizer for this plan - perhaps a bug, of the kind I believe Jonathan Lewis has commented on. Despite the full table scan with a cost of 172K, the NESTED LOOPS it is part of has a cost of only 23,949, i.e. 24K. The arithmetic does not add up in 10g. In other words, maybe it is not really the optimal plan: the 10g optimizer may have got its sums wrong, while 12c may have got them right. But luckily this 'imperfect' 10g plan happens to run fairly quickly for one reason or another.
The 12c plan starts with similar table scans but in a different order. The main difference is that instead of a full table scan on CLAIM_FACTS, it does an index range scan on CLAIM_FACTS_AK9 at a cost of 95,366. That is the single main component of the final total cost of 95,373.
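When comparing plans like this, the estimated row counts are what drive the cost differences, so it is worth checking them against the actual row counts. A minimal sketch of one way to do that with DBMS_XPLAN (available in both 10g and 12c); the query shown is only a stand-in for your real statement:

```sql
-- Run the real query once with runtime statistics collection enabled
-- (or set STATISTICS_LEVEL = ALL at session level instead of the hint).
SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*)
FROM   claim_facts cf;

-- Then show the last plan with estimated (E-Rows) vs. actual (A-Rows)
-- row counts side by side for each plan step.
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

Large gaps between E-Rows and A-Rows on either system would point at stale or missing statistics rather than at the optimizer itself.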
Suggestions for what to do? It is difficult, because there is clearly an anomaly in the 10g system for it to have produced the particular execution plan it uses. And there is other information that you have not yet provided - see later.
You could try to force a full table scan on CLAIM_FACTS by adding a suitable hint, for example "select /*+ full (CF) */ cf.vehicle_chass_no ...". However, hints are very difficult to use correctly and do not guarantee that you will get the desired end result, so be careful. For testing on 12c it may be worth trying, just to see what happens and what the resulting execution plan looks like. But I would not use such a simple, single hint in a production system, for a variety of reasons. For testing only, it might help to see whether you can force the full table scan on CLAIM_FACTS as in 10g, and whether the resulting performance is the same.
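A sketch of what that hinted test could look like, assuming your query aliases CLAIM_FACTS as CF (the alias and column list below are illustrative, and the rest of your query is elided):

```sql
-- FULL hint must name the alias used in the FROM clause, not the table
-- name, otherwise Oracle silently ignores it.
SELECT /*+ FULL(cf) */
       cf.vehicle_chass_no
FROM   claim_facts cf
WHERE  ...
```

Note that a hint that is syntactically wrong or names the wrong alias produces no error at all; the optimizer just ignores it, which is one of the reasons hints are risky outside of controlled testing.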
Both plans are parallel ones, which means the query is broken down into separate, independent steps, with several steps executed at the same time, i.e. several CPUs will be used and there will be several disk reads in flight at once. (That is a simplification of how parallel query works.) If the 10g and 12c systems do not have the same hardware configuration, then you would naturally expect different elapsed times when running the same parallel queries. See the end of this reply for the additional information you could provide.
And I would be very suspicious of the hardware configurations of the two systems. Maybe the 10g system has 16 CPU cores or more and hundreds of disks in a big drive array, while the 12c system has only 4 CPU cores and 4 disks. That would explain a lot about why 12c takes hours to run what 10g does in only 30 minutes.
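If you cannot get at the hardware specifications directly, a rough comparison can be made from inside the database. A sketch, assuming you can query V$OSSTAT on both systems (the PHYSICAL_MEMORY_BYTES statistic may not be present on every platform/version):

```sql
-- Run on both the 10g and the 12c system and compare the results.
SELECT stat_name, value
FROM   v$osstat
WHERE  stat_name IN ('NUM_CPUS', 'PHYSICAL_MEMORY_BYTES');

-- The CPU count the optimizer and parallel query actually see:
SHOW PARAMETER cpu_count
```

Disk subsystem differences will not show up here, but a large mismatch in CPU count or memory alone would already explain much of the elapsed-time gap for parallel queries.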
Remember what I said in my last reply:
"Without any information to the contrary, I guess the filter conditions are very weak, the optimizer believes it needs most of the data in the table, and a full table scan or even a full limited index scan is the "best" way to run this SQL. In other words, your query takes so long because your tables are big and your query accesses most of the data in those tables."
When dealing with very large tables and doing parallel full table scans on them, the most important factor is the amount of raw hardware you can throw at the problem. A system with twice the number of CPUs and twice the number of disks will run the same parallel query in half the time, or better. It could be that this, rather than the execution plan itself, is the main reason the 12c system is so much slower than the 10g system.
You could also provide us with the following information, which would allow a better analysis:
- Row counts for each table referenced in the query, and whether any of them are partitioned.
- Hardware configurations of both systems - the 10g one and the 12c one. Number of CPUs, their model and speed, physical memory, number of disks.
- The disks are very important - do 10g and 12c have similar disk subsystems? Are you using plain old disks, or do you have a SAN, or some kind of disk array? Are the drive arrays identical on both systems? How are they attached? Fast Fibre Channel, or something else? Maybe even network storage?
- The size of the SGA on both systems, i.e. the values of MEMORY_TARGET and SGA_TARGET.
- Whether the CLAIM_FACTS_AK9 index exists on the 10g system. I assume it does, but I would like that confirmed to be safe.
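Most of the database-side items above can be gathered with a few dictionary queries. A sketch, assuming you have access to the DBA views and that statistics are reasonably current (table names taken from the plans discussed above):

```sql
-- Row counts (per the last statistics gathering) and partitioning flag:
SELECT table_name, num_rows, partitioned, last_analyzed
FROM   dba_tables
WHERE  table_name IN ('DEALERS', 'SCARF_VEHICLE_EXCLUSIONS',
                      'CBX_TLEMSANI_2000TJE', 'CLAIM_FACTS');

-- Confirm the index exists (run this on the 10g system):
SELECT index_name, index_type, status
FROM   dba_indexes
WHERE  index_name = 'CLAIM_FACTS_AK9';

-- Memory settings on both systems:
SHOW PARAMETER sga_target
SHOW PARAMETER memory_target
```

NUM_ROWS is an estimate from the last ANALYZE/DBMS_STATS run; use SELECT COUNT(*) on each table if you need exact figures and can afford the scan.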
John Brady
Maybe you are looking for
-
OfficeJet 8620: How to configure the folder will tune to Scan the computer?
I am able to run a computer Scan > scan to PDF from the printer control panel. The file is placed on my Macbook in the documents folder. I would like to configure the folder in that it places the file. Is this possible? Note I'm not choose Scan t
-
Satellite M100: touchpad corner functions do not work
Hello My Satellite M100 quick launch functions on my touchpad stop working. The touchpad works fine, but the corner top don t functions. He won t acknowledges me even hit the corners when I go to the quick launch of installation area. I reinstalled t
-
Satellite P100-429 cracked lid
Hello I bought the Toshiba P100.It is under a month and had a crush on the lid. Who should I contact to help me get a replacement cover please? Thank youStephen
-
Black of saving scans on a D110a
Like many here, with the Mavericks, I can scan using the software just fine HP and see perfectly analysis. However, when I try to save the scan, it is saved as a black page with horizontal white lines. All the various messages about this problem ar
-
Our company wants to integrate Labview OpenBravo [www.openbravo.com] and open source, a java-based application. OpenBravo is a complete financial, accounting, budgeting, ERP system that we will use to control most of our facilities. LabVIEW control