One-to-many table-level replication

Hi all

I am configuring Streams replication from one table to many tables (3) in the same database (10.2.0.4).
The figure below shows my requirement.

                                    |--------->TEST2.TAB2(Destination)  
                                    |
TEST1.TAB1(Source) ---------------->|--------->TEST3.TAB3(Destination)
                                    |
                                    |--------->TEST4.TAB4(Destination)
Here are the steps I followed, but replication does not work.
CREATE USER strmadmin 
IDENTIFIED BY strmadmin
/ 

GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE, DBA to strmadmin; 

BEGIN 
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE( 
grantee => 'strmadmin', 
grant_privileges => true); 
END; 
/ 

Check that the Streams admin is created:
SELECT * FROM dba_streams_administrator; 


SELECT supplemental_log_data_min,
supplemental_log_data_pk,
supplemental_log_data_ui,
supplemental_log_data_fk,
supplemental_log_data_all FROM v$database;

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

alter table test1.tab1 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
alter table test2.tab2 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
alter table test3.tab3 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
alter table test4.tab4 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

conn strmadmin/strmadmin

var first_scn number;
set serveroutput on
DECLARE  scn NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(
         first_scn => scn);
  DBMS_OUTPUT.PUT_LINE('First SCN Value = ' || scn);
  :first_scn := scn;
END;
/

exec dbms_capture_adm.prepare_table_instantiation(table_name=>'test1.tab1');

begin
dbms_streams_adm.set_up_queue( 
queue_table => 'strm_tab', 
queue_name => 'strm_q', 
queue_user => 'strmadmin'); 
end; 
/ 

var first_scn number;
exec :first_scn:= 2914584

BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
     queue_name         => 'strm_q',
     capture_name       => 'capture_tab1',
     rule_set_name      => NULL,
     source_database    => 'SIVIN1',
     use_database_link  => false,
     first_scn          => :first_scn,
     logfile_assignment => 'implicit');
END;
/

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
     table_name         => 'test1.tab1',
     streams_type       => 'capture',
     streams_name       => 'capture_tab1',
     queue_name         => 'strm_q',
     include_dml        => true,
     include_ddl        => false,
     include_tagged_lcr => true,
     source_database    => 'SIVIN1',
     inclusion_rule     => true);
END;
/


BEGIN  
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
          table_name         => 'test2.tab2',
          streams_type       => 'apply',
          streams_name       => 'apply_tab2',
          queue_name         => 'strm_q',
          include_dml        => true,
          include_ddl        => false,
          include_tagged_lcr => true,
          source_database    => 'SIVIN1',
          inclusion_rule     => true);
END;
/

BEGIN  
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
          table_name         => 'test3.tab3',
          streams_type       => 'apply',
          streams_name       => 'apply_tab3',
          queue_name         => 'strm_q',
          include_dml        => true,
          include_ddl        => false,
          include_tagged_lcr => true,
          source_database    => 'SIVIN1',
          inclusion_rule     => true);
END;
/

BEGIN  
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
          table_name         => 'test4.tab4',
          streams_type       => 'apply',
          streams_name       => 'apply_tab4',
          queue_name         => 'strm_q',
          include_dml        => true,
          include_ddl        => false,
          include_tagged_lcr => true,
          source_database    => 'SIVIN1',
          inclusion_rule     => true);
END;
/

select STREAMS_NAME,
      STREAMS_TYPE,
      TABLE_OWNER,
      TABLE_NAME,
      RULE_TYPE,
      RULE_NAME
from DBA_STREAMS_TABLE_RULES;

begin  
  dbms_streams_adm.rename_table(
       rule_name       => 'TAB245' ,
       from_table_name => 'test1.tab1',
       to_table_name   => 'test2.tab2',
       step_number     => 0,
       operation       => 'add');
end;
/

begin  
  dbms_streams_adm.rename_table(
       rule_name       => 'TAB347' ,
       from_table_name => 'test1.tab1',
       to_table_name   => 'test3.tab3',
       step_number     => 0,
       operation       => 'add');
end;
/

begin  
  dbms_streams_adm.rename_table(
       rule_name       => 'TAB448' ,
       from_table_name => 'test1.tab1',
       to_table_name   => 'test4.tab4',
       step_number     => 0,
       operation       => 'add');
end;
/


col apply_scn format 999999999999
select dbms_flashback.get_system_change_number apply_scn from dual;

begin  
  dbms_apply_adm.set_table_instantiation_scn(
  source_object_name   => 'test1.tab1',
  source_database_name => 'SIVIN1',
  instantiation_scn    => 2916093);
end;
/

exec dbms_capture_adm.start_capture('capture_tab1');

exec dbms_apply_adm.start_apply('apply_tab2');
exec dbms_apply_adm.start_apply('apply_tab3');
exec dbms_apply_adm.start_apply('apply_tab4');
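
For anyone debugging a setup like this: whether the capture and apply processes actually started, and whether apply hit errors, can be checked from the standard 10.2 dictionary views (a diagnostic sketch, run as the Streams administrator):

```sql
-- Check component status and any recorded errors.
SELECT capture_name, status, error_number, error_message FROM dba_capture;
SELECT apply_name, status, error_number, error_message FROM dba_apply;

-- Transactions that apply could not process end up in the error queue.
SELECT apply_name, local_transaction_id, error_message FROM dba_apply_error;
```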
Could someone help me please? Please let me know where I've gone wrong.

If the steps above are not correct, then please let me know the proper steps.

-Yasser

First of all, I suggest implementing replication to a single destination first.

Here is a good example of what I've done.

Just use it and test it. Then prepare the tables in your other schemas (the 3 destinations, I mean).

ALTER system set global_names = TRUE scope = both;

oracle@ULFET-laptop:/MyNewPartition/oradata/MY$ mkdir Archive

shutdown immediate
startup mount
alter database archivelog;
alter database open;

ALTER SYSTEM SET log_archive_format='MY_%t_%s_%r.arc' SCOPE = spfile;

ALTER SYSTEM SET log_archive_dest_1 = 'location=/MyNewPartition/oradata/MY/Archive MANDATORY' SCOPE = spfile;

alter system set streams_pool_size = 25M scope = both;

create tablespace streams_tbs datafile '/MyNewPartition/oradata/MY/streams_tbs01.dbf' size 25M autoextend on maxsize unlimited;

grant dba to strmadmin identified by streams;

alter user strmadmin default tablespace streams_tbs quota unlimited on streams_tbs;

exec dbms_streams_auth.grant_admin_privilege( -
       grantee => 'strmadmin', -
       grant_privileges => true)

grant dba to demo identified by demo;

create table demo.emp as select * from hr.employees;

alter table demo.emp add constraint emp_emp_id_pk primary key (employee_id);

begin
  dbms_streams_adm.set_up_queue(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name  => 'strmadmin.streams_queue');
end;
/

SELECT name, queue_table from dba_queues where owner = 'STRMADMIN';

set linesize 150
col rule_owner format a10
select rule_owner, streams_type, streams_name, rule_set_name, rule_name from dba_streams_rules;

BEGIN
  dbms_streams_adm.add_table_rules(
    table_name     => 'HR.EMPLOYEES',
    streams_type   => 'CAPTURE',
    streams_name   => 'CAPTURE_EMP',
    queue_name     => 'STRMADMIN.STREAMS_QUEUE',
    include_dml    => TRUE,
    include_ddl    => FALSE,
    inclusion_rule => TRUE);
END;
/

select capture_name, rule_set_name, capture_user from dba_capture;

BEGIN
  DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE(
    capture_name   => 'CAPTURE_EMP',
    attribute_name => 'USERNAME',
    include        => true);
END;
/

select source_object_owner, source_object_name, instantiation_scn from dba_apply_instantiated_objects;

-- no rows returned - why?

DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'HR.EMPLOYEES',
    source_database_name => 'MY',
    instantiation_scn    => iscn);
END;
/

conn strmadmin/streams

SET SERVEROUTPUT ON
DECLARE
  emp_rule_name_dml VARCHAR2(30);
  emp_rule_name_ddl VARCHAR2(30);
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'hr.employees',
    streams_type    => 'apply',
    streams_name    => 'apply_emp',
    queue_name      => 'strmadmin.streams_queue',
    include_dml     => true,
    include_ddl     => false,
    source_database => 'MY',
    dml_rule_name   => emp_rule_name_dml,
    ddl_rule_name   => emp_rule_name_ddl);

  DBMS_OUTPUT.PUT_LINE('DML rule name: ' || emp_rule_name_dml);
  DBMS_OUTPUT.PUT_LINE('DDL rule name: ' || emp_rule_name_ddl);
END;
/

BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'apply_emp',
    parameter  => 'disable_on_error',
    value      => 'n');
END;
/

SELECT a.apply_name, a.rule_set_name, r.rule_owner, r.rule_name
FROM dba_apply a, dba_streams_rules r
WHERE a.rule_set_name = r.rule_set_name;

-- Note the rule_name value from the query above and use it below (here it was STRMADMIN.TEMPS14).

BEGIN
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'STRMADMIN.TEMPS14',
    from_table_name => 'HR.EMPLOYEES',
    to_table_name   => 'DEMO.EMP',
    operation       => 'ADD'); -- can be ADD or REMOVE
END;
/

BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'apply_emp');
END;
/

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'capture_emp');
END;
/

alter user hr identified by hr;

alter user hr account unlock;

conn hr/hr

insert into employees values
(400, 'Ilqar', 'Ibrahimov', '[email protected]', '123456789', sysdate, 'ST_MAN', 0, 30000, 110, 110);

insert into employees values
(500, 'Ulfet', 'Tanriverdiyev', '[email protected]', '123456789', sysdate, 'ST_MAN', 0, 30000, 110, 110);

Conn demo/demo

grant all on emp to public;

Select last_name, first_name from emp where employee_id = 300;

conn strmadmin/streams

select apply_name, queue_name from dba_apply;

select capture_name, status from dba_capture;


Similar Questions

  • Which way is best to instantiate tables for table-level replication?

    Hello

    I just have a doubt: which is the best way to instantiate tables for table-level replication?
    I need to replicate 20 to 100 tables.
    This way:

    DECLARE
      iscn NUMBER;
    BEGIN
      iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
      DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
        source_object_name   => 'SCOTT.EMP',
        source_database_name => 'DB1.WORLD',
        instantiation_scn    => iscn);
      DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
        source_object_name   => 'SCOTT.DEPT',
        source_database_name => 'DB1.WORLD',
        instantiation_scn    => iscn);
    END;
    /


    Or this way:

    DECLARE
      iscn NUMBER;
    BEGIN
      iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
      DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
        source_object_name   => 'SCOTT.EMP',
        source_database_name => 'DB1.WORLD',
        instantiation_scn    => iscn);
    END;
    /


    DECLARE
      iscn NUMBER;
    BEGIN
      iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
      DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
        source_object_name   => 'SCOTT.DEPT',
        source_database_name => 'DB1.WORLD',
        instantiation_scn    => iscn);
    END;
    /


    Should I use the same SCN for all tables, or an individual SCN for each?

    Thank you
    Ray

    Hello

    Find the SCN at the source, and set it for all tables by using the dbms_apply_adm.set_table_instantiation_scn API.
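
    To apply one SCN to many tables without repeating the call by hand, a loop along these lines can be used (a sketch only; the SCOTT owner and DB1.WORLD database name are placeholders from the question):

    ```sql
    DECLARE
      iscn NUMBER;
    BEGIN
      -- One SCN, captured once, reused for every table.
      iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
      FOR t IN (SELECT owner, table_name FROM dba_tables WHERE owner = 'SCOTT') LOOP
        DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
          source_object_name   => t.owner || '.' || t.table_name,
          source_database_name => 'DB1.WORLD',
          instantiation_scn    => iscn);
      END LOOP;
    END;
    /
    ```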

    Thank you
    Florent

  • Renaming tables/columns in two-way schema-level replication

    I have a bidirectional 10.2 schema-level replication environment configured. The application staff feel the need to rename tables or columns. We are replicating DDL.

    The question is: after we rename a table, is it necessary/best practice to run DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION? We did not re-instantiate. In our lab tests I did not see that it had to be done, but I am curious whether it must be.

    Mike,

    You don't need to run DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION again after the table has been renamed or some DDL has been issued against it. If you rename a table, the object_id does not change. The object propagated to the target with the same object_id already has existing Streams data dictionary information on the apply site, so you do not need to rerun PREPARE_SCHEMA_INSTANTIATION on the source.

    SQL> set heading off
    SQL> create table xx22 (x number);

    Table created.

    SQL> select object_id from user_objects where object_name = 'XX22';

    84717

    SQL> alter table xx22 rename to xx33;

    Table altered.

    SQL> select object_id from user_objects where object_name = 'XX33';

    84717

    If you add new columns and you feel these columns must be supplementally logged, then you can manually enable supplemental logging for those columns.
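
    For example, supplemental logging for specific columns can be enabled with a log group (a sketch; the table, group and column names here are illustrative, not from the original thread):

    ```sql
    -- Log the listed columns whenever any of them is updated.
    ALTER TABLE hr.employees
      ADD SUPPLEMENTAL LOG GROUP emp_extra_cols (new_col1, new_col2) ALWAYS;
    ```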

    Let me know if you have any other doubts.

    Thank you
    Florent

  • Merits and demerits of schema-level and table-level replication

    Hello

    We are evaluating Streams for our replication needs. We will replicate nearly 2000 tables bidirectionally. Most of the tables are in one schema. The main schema has a few tables that are not replicated.

    Should we replicate at the schema level, or at the table level? What are the advantages and disadvantages?

    Schema-level replication with a handful of exclusion rules seems simpler.

    With table-level Streams, besides creating 2000 table rules at the beginning, adding a new table to an existing Streams configuration will require additional steps: stopping capture/apply/propagation, creating new rules for the new table, and restarting replication. However, it seems that table-level replication offers flexibility in grouping tables. We want to group the tables into about 10 groups of 20 to 300 tables each. Each group would have its own capture, propagation and apply. That way, we can isolate the important groups from the unimportant ones.

    Which is the better setup in terms of manageability, flexibility, parallelism and performance?

    Thank you.

    Hello Stargaze,

    OK, so I understand that the grouping is based on the type of transactions and their importance. In this case the grouping would help you in many ways, such as:

    1. If you encounter a problem or a bug with one of the capture/apply processes, the other groups would keep working fine. You therefore have the option to stop one set of capture, propagation and apply until you solve the problem.

    2. If you perform a batch load where a transaction affects 10000+ records, it is possible that you could get stuck at 'PAUSED FOR FLOW CONTROL' on one of the Streams components. Having different groups would give you more flexibility to work around this by enabling and disabling groups.

    For more information on this please check the following:

    See: apply process states (enabled/disabled/aborted) and flow control in the apply process.

    3. Even though schema-level replication has the advantage of being simple, with fewer rules in the rule set, you must consider the two points above before you implement it. You need to test both the schema-level and the table-level setups with a typical load (simulating production) and identify potential problems in both. Some more points to consider:

    - apply the 10.2.0.4 patch set, or at least 10.2.0.3
    - apply all the required Streams patches (Note 437838.1 on Metalink)
    - follow the recommendations as indicated in:
         Note 418755.1 - 10.2.0.x.x Streams recommendations
         Note 335516.1 - Streams performance recommendations
         Note 413353.1 - 10.2 best practices for Streams in RAC environments

    Thank you
    Florent

  • How many tables are created when a flex family is created?

    How many tables are created when creating a flex family? I know about _Mungo, MungoBlobs and _AMap, but I want to know the rest of the tables...

    Hello

    When you create a new flex family, 46 new tables are created:

    1. _A
    2. _A_Args
    3. _A_Dim
    4. _A_DimP
    5. _A_Extension
    6. _A_Publish
    7. _A_Subtypes
    8. _C
    9. _C_AMap
    10. _C_Dim
    11. _C_DimP
    12. _C_Extension
    13. _C_Mungo
    14. _C_Publish
    15. _C_RMap
    16. _C_Rtgs
    17. _F
    18. _F_Args
    19. _F_Dim
    20. _F_DimP
    21. _F_Publish
    22. _FD_
    23. _FD_Dim
    24. _FD_DimP
    25. _FD_Publish
    26. _FD_TAttr
    27. _FD_TFilter
    28. _FD_TGroup
    29. _P
    30. _P_AMap
    31. _P_Dim
    32. _P_DimP
    33. _P_Extension
    34. _P_Group
    35. _P_Mungo
    36. _P_Publish
    37. _P_RMap
    38. _P_Root
    39. _P_Rtgs
    40. _PD
    41. _PD_Dim
    42. _PD_DimP
    43. _PD_Publish
    44. _PD_TAttr
    45. _PD_TFilter
    46. _PD_TGroup

    The MungoBlobs table is shared among all families, so it is not created again when you create a new family.

    Hope this helps.

    Gerardo

  • Skipping a table level in a dimension

    Hi, I have these 3 tables (with correct FKs and joins) in my Product dimension:

    CITY, SUPPLIER, PRODUCT.

    The proper hierarchy, from top to leaf, should be: City -> Supplier -> Product

    We would like to build the hierarchy of this dimension as:

    City -> Product, but we get this warning in BI Admin (and then the error in Answers):

    [39008] Logical dimension table CITY has a source CITY that does not join to any fact source.

    Can't it navigate the joins and skip the intermediate table level?
    Are we obliged to build the whole hierarchy and skip the level (and table) by defining a preferred drill path? (This solution worked.)

    Thanks in advance!

    To resolve this problem, you have two options.

    Make sure that you have the correct FK join relationships in the physical layer between these tables, as follows:

    City -< Supplier -< Product

    In the BMM, make the logical table (dimension table) with logical table source Product. Now, in the Product logical table source properties, on the General tab, add the physical tables and joins (first add the Supplier table and the join Product >- Supplier, then add the City table and the join Supplier >- City). Now drag and drop the columns from all the tables that you want to use in the levels of the dimension.

    Build the dimension object with levels and columns at each level.

    Option 1 (all three levels):

    City level: key with 'Use for drilldown' checked
    Supplier level: key with 'Use for drilldown' unchecked
    Product level: key with 'Use for drilldown' checked or unchecked

    Now you can drill in Answers from City down to Product; the intermediate level is not displayed, and OBIEE joins across all three tables.

    Option 2 (only City and Product as levels):

    City level: key with 'Use for drilldown' checked
    Product level: key with 'Use for drilldown' checked or unchecked

    Now you can drill in Answers from City down to Product; OBIEE joins across the three tables because you set the additional physical tables and joins in the Product logical table source.

    Regards
    Goran
    http://108obiee.blogspot.com

  • GoldenGate... is database-level replication possible?

    Hi all

    I have a test case. Is it possible to replicate the entire database using GoldenGate? I don't want to replicate one particular schema. Suppose I have 3 DB servers, and a schema will be created at random on one of them and should be replicated to the databases of each node - is it possible? Any kind of help would be appreciated.

    Hello

    Database-level replication using Oracle GoldenGate is not possible.

    Oracle GoldenGate is a best-of-breed, easy-to-deploy product used to replicate and integrate transactional data with subsecond speed among a variety of business systems. Oracle GoldenGate provides the flexibility to move data between similar and heterogeneous systems, including different versions of Oracle Database, different hardware platforms, and between Oracle and non-Oracle databases including Microsoft SQL Server, IBM DB2 for open systems and z/OS, Sybase and more.

    Kind regards

    Veera

  • Synchronizing tables before active-active two-way replication

    Hello

    I have a table on two sites A and B with the same name and structure but with different records. How can I synchronize the two tables before activating active-active replication?

    Thanks and greetings

    Mohamed Ahmed

    Hi Rudy,

    Yes, you can... Manually, you need to insert the missing records at both sites (wherever necessary), because to enable GoldenGate replication, the records must be the same on both sides (both sites).

    I think that if you go ahead with replication without synchronizing first, you will end up with duplicate records. Sometimes the process may ABEND. This creates data integrity problems.
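
    One way to see which rows are out of sync before enabling replication (a sketch; it assumes a database link site_b to the other site and identical table structures):

    ```sql
    -- Rows present at site A but missing at site B.
    SELECT * FROM my_table
    MINUS
    SELECT * FROM my_table@site_b;

    -- Rows present at site B but missing at site A.
    SELECT * FROM my_table@site_b
    MINUS
    SELECT * FROM my_table;
    ```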

    Does the table have a primary key?

    Kind regards

    Veera

  • 'No fact table at the required level' error

    I have a single dimension

    Account

    Two fact tables

    Threshold and Transaction

    These tables are joined to the Account dimension using the account number.

    Now when I create an analysis using account numbers from all three tables, it says there is no fact table at the required level. If I include any other fact from the two fact tables, it fills in null values. It works when I create an analysis from a single dimension and one fact table.


    Why do we get this error, and how should this be modelled?

    Published by: 979130 on December 29, 2012 12:52 AM

    I'm not sure about your data... I'll give a few options to try; any one of them should work.

    Assuming the two facts are at the same grain: create the logical fact table in the BMM, add the first fact as a source, then open the source properties and add the other fact table.

    Assuming the two facts are at different (or even the same) grain: create the logical fact table in the BMM, add the first fact, then add the 2nd fact as a 2nd logical table source, and set the Content tab for both facts.

    Whichever option you go with, create the metrics and use them in the report.

    In general: you must let the BI Server know how the data is distributed across the tables, so that the BI Server can answer as you expect.

    Hope this helps. Appreciate it if you mark as correct/helpful.

    Published by: Srini VIEREN on December 29, 2012 12:58

  • Identify table-level constraints in a database

    Hi all

    Is there any specific query that can help me identify table-level constraints in an Oracle 9i database?

    Thank you in advance

    Hi Vincent,

    To list the constraints on all tables for the current schema/user:

    Query:

    SELECT TABLE_NAME, CONSTRAINT_NAME,
           DECODE (CONSTRAINT_TYPE,
                   'C', 'Check',
                   'O', 'R/O View',
                   'P', 'Primary',
                   'R', 'Foreign',
                   'U', 'Unique',
                   'V', 'Check view'
                  ) CONSTRAINT_TYPE,
           STATUS
      FROM USER_CONSTRAINTS;
    

    Thank you
    Shankar

  • Table-level lock while truncating a partition?

    I am using the Oracle version below.

    Oracle Database 11 g Enterprise Edition Release 11.2.0.2.0 - 64 bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE Production 11.2.0.2.0
    AMT for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production

    I have a script to truncate the partition as below. Is there a table-level lock while truncating the partition? Any input is appreciated.

    ALTER TABLE TEMP_RESPONSE_TIME TRUNCATE PARTITION part1

    >
    Is there a table-level lock while truncating the partition?
    >
    No - it will lock the partition being truncated.

    Are there global indexes on the table? If so, they will be marked UNUSABLE and must be rebuilt.
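
    As a sketch of the two usual options (the UPDATE INDEXES clause keeps global indexes usable at the cost of a slower truncate; the table and partition names are from the question, the index name is illustrative):

    ```sql
    -- Option 1: maintain global indexes during the truncate.
    ALTER TABLE temp_response_time TRUNCATE PARTITION part1 UPDATE INDEXES;

    -- Option 2: truncate fast, then rebuild any unusable global index.
    ALTER TABLE temp_response_time TRUNCATE PARTITION part1;
    ALTER INDEX my_global_idx REBUILD;
    ```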

    See the VLDB and partitioning Guide
    http://Oracle.Su/docs/11g/server.112/e10837/part_oltp.htm
    >
    Impact of Partition Maintenance Operations on a Partitioned Table's Local Indexes

    Whenever a partition maintenance operation takes place, Oracle locks the affected table partitions for any DML operation. Data in the affected partitions, except for a DROP or TRUNCATE operation, is still fully available for any SELECT operation. Since local indexes are logically coupled with the table (data) partitions, only the local index partitions of the affected table partitions have to be maintained as part of a partition maintenance operation, which allows optimal processing of the index maintenance.

    For example, when you move an older partition from a high-end storage tier to a low-cost storage tier, the data and the index are still available for SELECT operations; the necessary index maintenance is either updating the existing index partition to reflect the new physical location of the data or, more commonly, moving and rebuilding the index partition to the low-cost storage tier as well. If you drop an older partition after you have archived it, its local index partitions get dropped as well, enabling a split-second partition maintenance operation that affects only the data dictionary.

    Impact of Partition Maintenance Operations on Global Indexes

    Whenever a global index is defined on a partitioned or non-partitioned table, there is no correlation between a distinct table partition and the index. Therefore, any partition maintenance operation affects all global indexes or index partitions. As with tables containing local indexes, the affected partitions are locked to prevent DML operations against the affected table partitions. However, unlike local index maintenance, any global index remains fully available for DML operations and does not affect the online availability of the OLTP system. Conceptually and technically, the index maintenance for global indexes during a partition maintenance operation is comparable to the index maintenance that would become necessary for a semantically identical DML operation.

    For example, dropping an old partition is semantically equivalent to deleting its records using the SQL DELETE statement. In both cases, all index entries of the deleted data set have to be removed from any global index as a normal index maintenance operation that does not affect the availability of the index for SELECT and DML operations. In this scenario, a drop operation represents the optimal approach: data is removed without the overhead of a conventional DELETE operation, and the indexes are maintained in a non-intrusive manner.

  • How to find table-level locks

    Oracle APPS R12

    Hi all

    How do I determine table-level locks in Oracle Apps R12? When I run my concurrent program it takes too much time to complete, and two or three concurrent programs are running in parallel against the same base table data, so how can I find the locks, if any, on the table? Any help is highly appreciated.


    Thanks and greetings
    Srikkanth.M

    Hello

    How do I determine table-level locks in Oracle Apps R12?

    Please see old similar threads.

    Table locks
    http://forums.Oracle.com/forums/search.jspa?threadID=&q=table+locks&objid=C3&DateRange=all&userid=&NumResults=15&rankBy=10001
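
    As a starting point, sessions holding locks on a specific table can be listed with a query along these lines (a sketch; replace YOUR_TABLE with the base table name):

    ```sql
    -- Sessions currently holding DML locks on a given table.
    SELECT s.sid, s.serial#, s.username, o.object_name, l.locked_mode
      FROM v$locked_object l
      JOIN dba_objects  o ON o.object_id = l.object_id
      JOIN v$session    s ON s.sid       = l.session_id
     WHERE o.object_name = 'YOUR_TABLE';
    ```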

    When I run my concurrent program it takes too much time to complete, and two or three concurrent programs are running in parallel against the same base table data

    Are these concurrent programs custom or seeded? Are your concurrent managers configured correctly? Please see these documents for more details.

    A comprehensive approach to the performance of Oracle Applications systems [ID 69565.1]
    What is the size of "Cache" recommended for a Manager Standard [ID 986228.1]
    Troubleshooting problems of performance in the Oracle Applications / E-Business Suite [ID 117648.1]

    Thank you
    Hussein

  • One-to-many with several tables on the "one" side and one table on the "many" side

    Sorry for the confusion in the title. Here's my question. In my program, I have 2 different tables that store 2 different types of entities. Each entity has a list of attachments that I store in a common attachments table. There is a one-to-many relationship between the entity tables and the attachments table.

    ENTITY_ONE (
      ID
      NAME
    )

    ENTITY_TWO (
      ID
      NAME
    )

    ATTACHMENTS (
      ID
      ENTITY_ID
      ATTACHMENT_NAME
    )

    The ENTITY_ID of the attachments table is used to link the attachments to either entity one or entity two. All IDs are generated by a single sequence, so they are always unique. My question is: how can I map this relationship in the EntityOne, EntityTwo and Attachment Java classes?

    For EntityOne and EntityTwo, you can simply define a normal OneToMany mapping using the foreign key.
    Are you using the TopLink API or JPA? JPA requires a mappedBy for the OneToMany, so this may be more difficult. You should be able to simply add a JoinColumn on the OneToMany and make the column insertable/updatable = false.

    For the foreign key itself, you could either map it as a Basic (DirectToFieldMapping) and maintain it yourself in your model, or use a TopLink VariableOneToOne mapping (this will require a common interface shared by the entities).

    ---
    James: http://www.eclipselink.org: http://en.wikibooks.org/wiki/Java_Persistence

  • Multimaster table replication

    Hi all
    Just the question:
    I have two tables where the second is a clone of the first; so I have two tables that are copies of each other.
    I would like to replicate all changes from the first to the second and all changes from the second to the first, and triggers are not allowed.

    I am thinking of replacing one of the tables with a materialized view on the other table, with refresh on commit, so all changes to the master table would be propagated to the MV.
    But my question/problem is how to propagate changes made to the MV back to the main table.


    Any ideas?
    Thanks in advance

    So, if your web application is written in such a way that it directly manipulates the two tables with identical data, you have to cheat a little.
    Do not create an MV; instead, create a 1:1 view on the main table.
    Then create an INSTEAD OF trigger for insert, update and delete on the view, and make this trigger do nothing.
    Suppose the view is called my_view; the trigger code would be:

    create or replace trigger tg_donothing instead of insert or update or delete on my_view
    begin
    null;
    end;
    /
    

    So now, when the web application performs DML against the main table, the DML is done. When the web application performs the same operation on my_view, the database does nothing.
    There would be only one problem: if the web application checks SQL%ROWCOUNT to find out how many rows were affected and compares it with SQL%ROWCOUNT against the actual table.

    Published by: spajdy on October 21, 2009 10:51

    Published by: spajdy on October 21, 2009 10:52

  • How to exclude tables from schema-level replication?

    Hello

    I'm working on Oracle 10g Streams data replication.
    My replication type is "schema".
    Can anyone help me understand how to exclude certain tables from schema-level replication?


    Thank you
    Faziarain

    user498843 wrote:
    Hi, what do you mean by this ID: 239623.1?
    Is it a document ID in Metalink or something else?

    MetaLink
