GoldenGate replication of tables without keys.

I'm looking for experiences others have had using GoldenGate to replicate tables with no primary/unique key.  I am aware that GoldenGate will try to use all viable columns on the table as the key, and that the absence of a key can cause synchronization problems.  Other possible problems are performance impacts on both the source and the target database.  I also know that the best approach would be to identify and implement unique keys where possible.
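
For example, a quick way to inventory the tables that lack a primary or unique key (a sketch; the schema name MYAPP is a placeholder):

SELECT t.table_name
FROM dba_tables t
WHERE t.owner = 'MYAPP'
AND NOT EXISTS (SELECT 1
                FROM dba_constraints c
                WHERE c.owner = t.owner
                AND c.table_name = t.table_name
                AND c.constraint_type IN ('P', 'U'));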

Has anyone set up replication with GoldenGate where a large number of tables don't have keys (a schema with at least 100 tables without a key, the largest in the millions of rows, and volatility on these tables of 5,000-15,000 inserts/day)?  Or is there Oracle documentation providing a way to estimate the potential for performance problems?

For reference: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production.  This would be a new implementation, so GoldenGate would be a compatible version.

Thank you.

Hello

A primary key is one of the main requirements for Oracle GoldenGate. Please see the document below, which explains the importance of the primary key and the way Oracle GoldenGate treats tables without one.

https://docs.oracle.com/goldengate/1212/gg-winux/GIMSS/setup_preparation.htm#GIMSS221

Under this heading, check the section below:

3.2.2 Assigning Row Identifiers
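
As that section explains, when a table has no usable key GoldenGate falls back to using all viable columns as the row identifier, so all of those columns must be supplementally logged (ADD TRANDATA in GGSCI arranges this per table). A minimal SQL sketch of the equivalent statement (the table name is a placeholder):

ALTER TABLE myapp.no_key_tab ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;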

Kind regards

Veera

Tags: Business Intelligence

Similar Questions

  • Merits and demerits of schema-level vs. table-level replication

    Hello

    We are evaluating Streams for our replication needs. We replicate nearly 2000 tables bidirectionally. Most of the tables are in one schema. The main schema has a few tables that are not replicated.

    Should we replicate at the schema level, or at the table level? What are the advantages and disadvantages?

    Schema-level replication with a handful of exclusion rules seems simpler.

    In Streams at the table level, besides creating 2000 table rules at the beginning, adding a new table to an existing Streams configuration will require additional steps: stopping capture/propagation/apply, creating new rules for the new table, and restarting replication. However, table-level replication seems to offer flexibility in terms of grouping tables. We want to group the tables into about 10 groups of 20 to 300 tables each. Each group would have its own capture, propagation, and apply. That way, we can isolate the important groups from the unimportant ones.

    Which is the better setup in terms of manageability, flexibility, parallelism, and performance?

    Thank you.

    Hello Stargaze,

    OK, so I understand that the grouping is based on the type of transactions and their importance. In this case the grouping would help you in many ways, such as:

    1. If you encounter a problem or a bug with one of the capture/apply processes, the other groups would keep working fine. You therefore have the option of shutting down one set of capture, propagation, and apply processes until you solve the problem (a sketch of pausing one group follows the notes below).

    2. If you perform a batch load where a transaction affects over 10,000 records, it is possible to get stuck at 'PAUSED FOR FLOW CONTROL' on any of the Streams components. Having different groups would give you more flexibility to work around this by enabling and disabling groups.

    For more information on this please check the following:

    Apply process is enabled but has stopped applying changes

    3. With schema-level replication, even though you have the advantage of simplicity and fewer rules in the rule set, you must consider the 2 points above before implementing it. You need to test both schema-level and table-level replication with a typical load (simulating production) and identify potential problems with both. Some more points to consider:

    - apply the 10.2.0.4 patch set, or at least 10.2.0.3
    - apply all the required Streams patches (Note 437838.1 on Metalink)
    - follow the recommendations in:
         Note 418755.1 - 10.2.0.x.x Streams Recommendations
         Note 335516.1 - Streams Performance Recommendations
         Note 413353.1 - 10.2 Best Practices for Streams in a RAC Environment
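
    As mentioned in point 1, each group can be paused and resumed independently of the others; a minimal sketch (the group names are hypothetical):

    exec dbms_capture_adm.stop_capture('capture_grp1');
    exec dbms_apply_adm.stop_apply('apply_grp1');
    -- investigate and fix the problem, then restart just this group:
    exec dbms_capture_adm.start_capture('capture_grp1');
    exec dbms_apply_adm.start_apply('apply_grp1');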

    Thank you
    Florent

  • GoldenGate Bidirectional replication problem

    Hello

    I followed the post below for two-way replication with GoldenGate:
    http://vishaldesai.wordpress.com/2011/02/24/golden-gate-bidirectional-replication-oracle-to-oracle-example/
    I am not able to get two-way replication working, although I have tested the two streams (source to target and vice versa) separately and they work fine.

    My users are:
    source side: ggs
    target side: ggs1


    I get a unique constraint error, certainly because of the TRANLOGOPTIONS EXCLUDEUSER parameter, but I have specified this setting in my respective extracts.

    In the extract on the source side, I used TRANLOGOPTIONS EXCLUDEUSER ggs1.
    In the extract on the target side, I used TRANLOGOPTIONS EXCLUDEUSER ggs.

    Is this correct? If yes, could you help me figure out why my replicat is abending?


    Details of my scenario:

    Source side:

    extract
    EXTRACT soext119
    SETENV (ORACLE_SID = GGDB)
    USERID ggs, PASSWORD ggs12345
    RMTHOST localhost, MGRPORT 7810
    TRANLOGOPTIONS EXCLUDEUSER ggs1
    GETAPPLOPS
    IGNOREREPLICATES
    RMTTRAIL ./dirdat/rt
    DDL INCLUDE ALL
    TABLE ggs.stocksrc;

    replicat
    REPLICAT sorep119
    SETENV (ORACLE_SID = GGDB)
    ASSUMETARGETDEFS
    USERID ggs, PASSWORD ggs12345
    DDL INCLUDE ALL
    HANDLECOLLISIONS
    MAP ggs1.stocksrc, TARGET ggs.stocksrc;

    Target side:

    replicat
    REPLICAT trep119
    SETENV (ORACLE_SID = GGDB)
    ASSUMETARGETDEFS
    USERID ggs1, PASSWORD ggs12345
    DDL INCLUDE ALL
    MAP ggs.stocksrc, TARGET ggs1.stocksrc;

    extract
    EXTRACT text119
    SETENV (ORACLE_SID = GGDB)
    USERID ggs1, PASSWORD ggs12345
    RMTHOST localhost, MGRPORT 7809
    TRANLOGOPTIONS EXCLUDEUSER ggs
    GETAPPLOPS
    IGNOREREPLICATES
    RMTTRAIL ./dirdat/rs
    DDL INCLUDE ALL
    TABLE ggs1.stocksrc;




    Thank you
    Vikas

    Hello

    1. Why do you use the GG user (ggs) as the replication user? Have you tried with another user?

    2. You must exclude transactions by user "ggs" in the source database, and transactions by user ggs1 in the target database, in the extracts.

    If you use the GG user's own tables for bidirectional replication, you will certainly get the constraint violation error.

    Try the test setup below:

    1. Create a new user on the source side, e.g. src, and create the stocksrc table in the src schema.

    2. Create a new user tgt in the target database and create the stocksrc table under the tgt schema.

    3. Use user ggs only for GG admin on the source, and ggs1 only for GG admin on the target, and exclude the GG users' transactions in both extract parameter files. A sketch of steps 1 and 2 follows.
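
    A minimal SQL sketch of steps 1 and 2 (passwords and grants are assumptions; the empty CTAS on the target is just one way to get the structure there before the initial load):

    -- on the source database
    create user src identified by src;
    grant connect, resource to src;
    create table src.stocksrc as select * from ggs.stocksrc;

    -- on the target database
    create user tgt identified by tgt;
    grant connect, resource to tgt;
    create table tgt.stocksrc as select * from src.stocksrc@srcdb where 1 = 0; -- db link srcdb assumed; load the rows via the initial load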

    Source side extract:

    extract
    EXTRACT soext119
    SETENV (ORACLE_SID = GGDB)
    USERID ggs, PASSWORD ggs12345
    RMTHOST localhost, MGRPORT 7810
    TRANLOGOPTIONS EXCLUDEUSER ggs
    GETAPPLOPS
    IGNOREREPLICATES
    RMTTRAIL ./dirdat/rt
    DDL INCLUDE ALL
    TABLE src.stocksrc;

    replicat
    REPLICAT sorep119
    SETENV (ORACLE_SID = GGDB)
    ASSUMETARGETDEFS
    USERID ggs, PASSWORD ggs12345
    DDL INCLUDE ALL
    HANDLECOLLISIONS
    MAP tgt.stocksrc, TARGET src.stocksrc;

    Target side:

    replicat
    REPLICAT trep119
    SETENV (ORACLE_SID = GGDB)
    ASSUMETARGETDEFS
    USERID ggs1, PASSWORD ggs12345
    DDL INCLUDE ALL
    MAP src.stocksrc, TARGET tgt.stocksrc;

    extract
    EXTRACT text119
    SETENV (ORACLE_SID = GGDB)
    USERID ggs1, PASSWORD ggs12345
    RMTHOST localhost, MGRPORT 7809
    TRANLOGOPTIONS EXCLUDEUSER ggs1
    GETAPPLOPS
    IGNOREREPLICATES
    RMTTRAIL ./dirdat/rs
    DDL INCLUDE ALL
    TABLE tgt.stocksrc;

    HTH

  • One-to-many table-level replication

    Hi all

    I was configuring Streams replication from one table to many (3) tables in the same database (10.2.0.4).
    The figure below shows my requirement.
    
                                        |--------->TEST2.TAB2(Destination)  
                                        |
    TEST1.TAB1(Source) ---------------->|--------->TEST3.TAB3(Destination)
                                        |
                                        |--------->TEST4.TAB4(Destination)
    Here are the steps I followed, but replication does not work.
    CREATE USER strmadmin 
    IDENTIFIED BY strmadmin
    / 
    
    GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE, DBA to strmadmin; 
    
    BEGIN 
    DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE( 
    grantee => 'strmadmin', 
    grant_privileges => true); 
    END; 
    / 
    
    -- check that the Streams admin is created:
    SELECT * FROM dba_streams_administrator; 
    
    
    SELECT supplemental_log_data_min,
    supplemental_log_data_pk,
    supplemental_log_data_ui,
    supplemental_log_data_fk,
    supplemental_log_data_all FROM v$database;
    
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    
    alter table test1.tab1 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    alter table test2.tab2 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    alter table test3.tab3 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    alter table test4.tab4 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    
    conn strmadmin/strmadmin
    
    var first_scn number;
    set serveroutput on
    DECLARE  scn NUMBER;
    BEGIN
      DBMS_CAPTURE_ADM.BUILD(
             first_scn => scn);
      DBMS_OUTPUT.PUT_LINE('First SCN Value = ' || scn);
      :first_scn := scn;
    END;
    /
    
    exec dbms_capture_adm.prepare_table_instantiation(table_name=>'test1.tab1');
    
    begin
    dbms_streams_adm.set_up_queue( 
    queue_table => 'strm_tab', 
    queue_name => 'strm_q', 
    queue_user => 'strmadmin'); 
    end; 
    / 
    
    var first_scn number;
    exec :first_scn:= 2914584
    
    BEGIN
      DBMS_CAPTURE_ADM.CREATE_CAPTURE(
         queue_name         => 'strm_q',
         capture_name       => 'capture_tab1',
         rule_set_name      => NULL,
         source_database    => 'SIVIN1',
         use_database_link  => false,
         first_scn          => :first_scn,
         logfile_assignment => 'implicit');
    END;
    /
    
    BEGIN
      DBMS_STREAMS_ADM.ADD_TABLE_RULES(
         table_name         => 'test1.tab1',
         streams_type       => 'capture',
         streams_name       => 'capture_tab1',
         queue_name         => 'strm_q',
         include_dml        => true,
         include_ddl        => false,
         include_tagged_lcr => true,
         source_database    => 'SIVIN1',
         inclusion_rule     => true);
    END;
    /
    
    
    BEGIN  
      DBMS_STREAMS_ADM.ADD_TABLE_RULES(
              table_name         => 'test2.tab2',
              streams_type       => 'apply',
              streams_name       => 'apply_tab2',
              queue_name         => 'strm_q',
              include_dml        => true,
              include_ddl        => false,
              include_tagged_lcr => true,
              source_database    => 'SIVIN1',
              inclusion_rule     => true);
    END;
    /
    
    BEGIN  
      DBMS_STREAMS_ADM.ADD_TABLE_RULES(
              table_name         => 'test3.tab3',
              streams_type       => 'apply',
              streams_name       => 'apply_tab3',
              queue_name         => 'strm_q',
              include_dml        => true,
              include_ddl        => false,
              include_tagged_lcr => true,
              source_database    => 'SIVIN1',
              inclusion_rule     => true);
    END;
    /
    
    BEGIN  
      DBMS_STREAMS_ADM.ADD_TABLE_RULES(
              table_name         => 'test4.tab4',
              streams_type       => 'apply',
              streams_name       => 'apply_tab4',
              queue_name         => 'strm_q',
              include_dml        => true,
              include_ddl        => false,
              include_tagged_lcr => true,
              source_database    => 'SIVIN1',
              inclusion_rule     => true);
    END;
    /
    
    select STREAMS_NAME,
          STREAMS_TYPE,
          TABLE_OWNER,
          TABLE_NAME,
          RULE_TYPE,
          RULE_NAME
    from DBA_STREAMS_TABLE_RULES;
    
    begin  
      dbms_streams_adm.rename_table(
           rule_name       => 'TAB245' ,
           from_table_name => 'test1.tab1',
           to_table_name   => 'test2.tab2',
           step_number     => 0,
           operation       => 'add');
    end;
    /
    
    begin  
      dbms_streams_adm.rename_table(
           rule_name       => 'TAB347' ,
           from_table_name => 'test1.tab1',
           to_table_name   => 'test3.tab3',
           step_number     => 0,
           operation       => 'add');
    end;
    /
    
    begin  
      dbms_streams_adm.rename_table(
           rule_name       => 'TAB448' ,
           from_table_name => 'test1.tab1',
           to_table_name   => 'test4.tab4',
           step_number     => 0,
           operation       => 'add');
    end;
    /
    
    
    col apply_scn format 999999999999
    select dbms_flashback.get_system_change_number apply_scn from dual;
    
    begin  
      dbms_apply_adm.set_table_instantiation_scn(
      source_object_name   => 'test1.tab1',
      source_database_name => 'SIVIN1',
      instantiation_scn    => 2916093);
    end;
    /
    
    exec dbms_capture_adm.start_capture('capture_tab1');
    
    exec dbms_apply_adm.start_apply('apply_tab2');
    exec dbms_apply_adm.start_apply('apply_tab3');
    exec dbms_apply_adm.start_apply('apply_tab4');
    Could someone help me please... Please let me know where I've gone wrong.

    If the steps above are not correct, then please let me know the required steps.

    -Yasser

    First of all, I suggest implementing it with a single destination.

    Here is a good example of what I've done.

    Just use it and test. Then prepare the tables for your other schemas (the 3 destinations, I mean).

    alter system set global_names = TRUE scope = both;

    oracle@ULFET-laptop:/MyNewPartition/oradata/MY$ mkdir Archive

    shutdown immediate
    startup mount
    alter database archivelog;
    alter database open;

    ALTER SYSTEM SET log_archive_format='MY_%t_%s_%r.arc' SCOPE = spfile;

    ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/MyNewPartition/oradata/MY/Archive MANDATORY' SCOPE = spfile;

    -- alter system set streams_pool_size = 25M scope = both;

    create tablespace streams_tbs datafile '/MyNewPartition/oradata/MY/streams_tbs01.dbf' size 25M autoextend on maxsize unlimited;

    grant dba to strmadmin identified by streams;

    alter user strmadmin default tablespace streams_tbs quota unlimited on streams_tbs;

    exec dbms_streams_auth.grant_admin_privilege( -
    grantee => 'strmadmin', -
    grant_privileges => true)

    grant dba to demo identified by demo;

    create table DEMO.EMP as select * from HR.EMPLOYEES;

    alter table demo.emp add constraint emp_emp_id_pk primary key (employee_id);

    begin
    dbms_streams_adm.set_up_queue(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name => 'strmadmin.streams_queue');
    end;
    /

    select name, queue_table from dba_queues where owner = 'STRMADMIN';

    set linesize 150
    col rule_owner format a10
    select rule_owner, streams_type, streams_name, rule_set_name, rule_name from dba_streams_rules;

    BEGIN
    dbms_streams_adm.add_table_rules(
    table_name => 'HR.EMPLOYEES',
    streams_type => 'CAPTURE',
    streams_name => 'CAPTURE_EMP',
    queue_name => 'STRMADMIN.STREAMS_QUEUE',
    include_dml => TRUE,
    include_ddl => FALSE,
    inclusion_rule => TRUE);
    END;
    /

    select capture_name, rule_set_name, capture_user from dba_capture;

    BEGIN
    DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE(
    capture_name => 'CAPTURE_EMP',
    attribute_name => 'USERNAME',
    include => true);
    END;
    /

    select source_object_owner, source_object_name, instantiation_scn from dba_apply_instantiated_objects;

    -- no rows returned - why?

    DECLARE
    iscn NUMBER;
    BEGIN
    iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
    DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name => 'HR.EMPLOYEES',
    source_database_name => 'MY',
    instantiation_scn => iscn);
    END;
    /

    conn strmadmin/streams

    SET SERVEROUTPUT ON
    DECLARE
    emp_rule_name_dml VARCHAR2 (30);
    emp_rule_name_ddl VARCHAR2 (30);
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'hr.employees',
    streams_type => 'apply',
    streams_name => 'apply_emp',
    queue_name => 'strmadmin.streams_queue',
    include_dml => true,
    include_ddl => false,
    source_database => 'MY',
    dml_rule_name => emp_rule_name_dml,
    ddl_rule_name => emp_rule_name_ddl);

    DBMS_OUTPUT.PUT_LINE('DML rule name: ' || emp_rule_name_dml);
    DBMS_OUTPUT.PUT_LINE('DDL rule name: ' || emp_rule_name_ddl);
    END;
    /

    BEGIN
    DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'apply_emp',
    parameter => 'disable_on_error',
    value => 'n');
    END;
    /

    SELECT a.apply_name, a.rule_set_name, r.rule_owner, r.rule_name
    FROM dba_apply a, dba_streams_rules r
    WHERE a.rule_set_name = r.rule_set_name;

    -- select the rule_name value and use it below (in this example, TEMPS14)

    BEGIN
    DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name => 'STRMADMIN.TEMPS14',
    from_table_name => 'HR.EMPLOYEES',
    to_table_name => 'DEMO.EMP',
    operation => 'ADD'); -- can be ADD or REMOVE
    END;
    /

    BEGIN
    DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'apply_emp');
    END;
    /

    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'capture_emp');
    END;
    /

    alter user hr identified by hr;

    alter user hr account unlock;

    conn hr/hr

    insert into employees values
    (400, 'Ilqar', 'Ibrahimov', '[email protected]', '123456789', sysdate, 'ST_MAN', 0, 30000, 110, 110);

    insert into employees values
    (500, 'Ulfet', 'Tanriverdiyev', '[email protected]', '123456789', sysdate, 'ST_MAN', 0, 30000, 110, 110);

    conn demo/demo

    grant all on emp to public;

    Select last_name, first_name from emp where employee_id = 300;

    conn strmadmin/streams
    select apply_name, queue_name from dba_apply;

    select capture_name, state from dba_capture;

  • GoldenGate and streams at the same time?

    Hi people,

    Today I have Streams replication from source database A to target database B, and it works very well. Now I have a new schema and want to use GoldenGate to replicate the tables in this schema from source database A to target database B.

    Has anyone tried this before? I am concerned about the way the archive logs are managed on database A, for example. If Streams on that database decides that it is done with archive log X, could it get rid of it before GoldenGate has read it? Are there other problems?

    Yes, it's certainly doable. You can have OGG and Streams replicate the same data.   If you use RMAN to remove old archive log files, then you want to register the extract process with the database to ensure that RMAN does not remove archive log files that the extract still needs.  With integrated extract, this is done automatically when you register.  With classic extract, you can do this using the command REGISTER EXTRACT <name> LOGRETENTION.
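
    For example, once the extract is registered, the retention requirement is visible in the data dictionary; a sketch of a check query (run as a privileged user):

    SELECT capture_name, required_checkpoint_scn
    FROM dba_capture;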

  • Replication pause question

    Hello

    I am a VMware engineer. We are implementing SRM with Dell storage, and I'm new to Dell replication.

    We have completed the initial replication of 5 volumes. We did this replication at the primary site only (over the LAN).

    My question is: can I set up the replication schedule for these volumes (at the main site) before moving the storage back to the DR site?

    Can we pause the replication?

    Appreciate any suggestions.

    Thanks and regards

    Tejas

    Yes you can.  Just remember, before you pause replication to move the array, to let all running replications finish.

    Make sure that you can TEST all the interfaces once it's at the DR site, and that port 3260 is open.  Also, check that the WAN routers honor the 'do not fragment' setting; MPLS tends to ignore it by default.

  • VPN replication with active/standby failover

    Hi everyone,

    Our ASA is configured for active/standby failover.

    If the active ASA has a VPN image, profile, and plug-ins, will those also replicate to the standby ASA?

    Or do I have to do it manually on the standby ASA?

    Regards

    MAhesh

    The VPN image and profile are not replicated; you will have to do it manually.  Here is a list of what ends up in a stateful active/standby configuration:

    • The NAT translation table

    • TCP connection states

    • UDP connection states

    • The ARP table

    • The Layer 2 bridge table (when running in transparent firewall mode)

    • HTTP connection states (if HTTP replication is enabled)

    • The ISAKMP/IPsec SA table

    • The GTP PDP connection database

    --

    Please do not forget to rate replies and mark the correct answer

  • Replication problem

    Hello

    I have a very weird and one urgent problem.

    (My source database is 10gR2 (10.2.0.3/RedHat4) and my target database is 11gR2 (11.2.0.3/RedHat4).)

    Source database:

    extract E_DPRD
    USERID ogg@OSIRIS1, PASSWORD xxx
    EXTTRAIL /oracle/product/goldengate/data/PD
    PURGEOLDEXTRACTS
    DBOPTIONS ALLOWUNUSEDCOLUMN
    TRANLOGOPTIONS ASMUSER sys@ASM_OSIRIS, ASMPASSWORD xxxx
    WILDCARDRESOLVE DYNAMIC
    TABLE OSIRIS.*;
    TABLEEXCLUDE OSIRIS.TOAD_PLAN_TABLE

    extract P_DPRD
    PASSTHRU
    RMTHOST 10.206.30.129, MGRPORT 7840
    RMTTRAIL /app/oracle/product/goldengate/data/PD
    TABLE OSIRIS.*;

    Target database:

    extract P_DPRD
    PASSTHRU
    RMTHOST 10.206.30.xxx, MGRPORT 7840
    RMTTRAIL /app/oracle/product/goldengate/data/PD
    TABLE OSIRIS.*;

    All processes are running and it seems that the 2 databases are synchronized, but the problem is that for 4 tables GG does not replicate all the columns! It replicates just the PK and some of the columns...

    Kind regards


    Keep TABLEEXCLUDE OSIRIS.TOAD_PLAN_TABLE above TABLE OSIRIS.* in the params.

    If you have already enabled trandata, there will be data inconsistency on the target database; you need to do the initial load once more.
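
    To check which columns are actually covered by supplemental log groups on one of the affected tables, a query along these lines can help (the table name is a placeholder):

    select log_group_name, column_name
    from dba_log_group_columns
    where owner = 'OSIRIS'
    and table_name = 'MYTAB';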

  • 8127: Cannot create ACTIVE STANDBY PAIR scheme because another replication scheme exists

    Hi all


    I'm trying to set up ACTIVE STANDBY PAIR schema replication on a data store that already has bidirectional replication defined on some tables.

    Datastore1 and Datastore2 have bidirectional replication set up on a few tables. I want to have active/standby schema replication between Datastore1 and Datastore3. While trying to define the ACTIVE STANDBY PAIR replication scheme on Datastore1, I get the error below:

    8127: Cannot create ACTIVE STANDBY PAIR scheme because another replication scheme exists

    Could you please let me know how to achieve this configuration?

    This configuration is not possible. You can't mix classic replication (CREATE REPLICATION) with active/standby pair replication (CREATE ACTIVE STANDBY PAIR). They are mutually exclusive.
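
    If the data store really must become part of an active/standby pair, the classic scheme has to be dropped first; a sketch, with placeholder names:

    DROP REPLICATION testuser.old_scheme;
    CREATE ACTIVE STANDBY PAIR datastore1, datastore3;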

    Chris

  • CDC GoldenGate & ODI

    We have a requirement to replicate data from a central database (Oracle) to several regional databases (also Oracle) in real time. The plan is to use GoldenGate for this.

    I came across this white paper (http://www.oracle.com/technetwork/middleware/data-integrator/overview/odiee-km-for-oracle-goldengate-133579.pdf), which discusses the benefits of ODI CDC (Change Data Capture) data replication with GoldenGate. But I do not understand what specific benefit using ODI (with GoldenGate) would give me here. No data transformation is necessary during replication to the target in my case.

    That said, can someone help me understand if ODI can really help in my scenario specifically? Also, in what scenario can the combination of ODI & GoldenGate help, and how would it make things better?

    If there is no transformation involved, ODI CDC is not necessary. The integration is essentially an advantage for people who are already looking to use ODI. The Oracle GoldenGate extraction process is more sophisticated, faster, and has less overhead than generic ODI CDC.

    For more information on OGG, you can find white papers here: http://www.oracle.com/technetwork/middleware/goldengate/overview/index.html

  • Table-level instantiation SCN

    Hi DBAs,

    I have set up unidirectional Streams replication for 17 tables of a schema to a destination Oracle 11.1.0.7 database. As I used Data Pump to move the tables from one database to the other, the tables were instantiated (I think). Streams replication is working great between the replicated tables.

    Now my question is - do I also need to set the instantiation SCN manually using DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN? If not, in case my capture or apply process is stopped for some time (as long as the archive logs are available), after starting it (capture or apply) do I need to set the instantiation SCN again?

    Regards
    -Samar-

    Once a Streams row has replicated successfully, it means your instantiation is correct on both sites and your global_names are consistent. From then on, you may safely forget about the instantiation stuff.

    The only case I know where you go back and play with instantiation is when you force a resynchronization of a table not via import/export, but using a full insert from the source without stopping the apply. In that case, you can play with the SCN to ignore, or reset and restart the table instantiation on both sites, so that the captured changes do not end up in the error queue.
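
    To see which instantiation SCNs are currently recorded on the apply side, you can query:

    select source_object_owner, source_object_name, instantiation_scn
    from dba_apply_instantiated_objects;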

  • How to exclude tables from schema-level replication?

    Hello

    I'm working on Oracle 10g Streams data replication.
    My replication type is schema level.
    Can anyone help me understand how to exclude certain tables from schema-level replication?


    Thank you
    Faziarain

    user498843 wrote:
    Hi, what do you mean by this ID: 239623.1?
    Is this a document ID in Metalink or something else?

    MetaLink
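
    One common approach, sketched below, is a negative rule that makes the capture process discard changes for the excluded table (all names are placeholders; see that note for the full details):

    begin
      dbms_streams_adm.add_table_rules(
        table_name     => 'scott.skip_me',
        streams_type   => 'capture',
        streams_name   => 'my_capture',
        queue_name     => 'strmadmin.streams_queue',
        include_dml    => true,
        include_ddl    => false,
        inclusion_rule => false); -- false creates a negative (exclusion) rule
    end;
    /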

  • Remove replication (missing catrepr.sql?)

    Before 10g, you could run the catrepr.sql script to uninstall replication from the catalog. This script seems to have been removed in Oracle 10g. Does anybody know what it was replaced by? Or is there a new method for uninstalling replication?

    Thanks in advance!

    Hello

    The catrepr.sql script used to be present until the 9.2.0.1 release (base release of 9.2). From the 9.2.0.2 patch set onward, Oracle did away with this script and there is no alternative script. Until 9.2.0.1, replication was a database feature that could be removed and re-added; later versions of Oracle consider replication functionality built into the core of the RDBMS and do not provide an option to remove the replication feature itself.

    You can remove replication groups and replication configurations, but not the replication packages, views, and base tables, as they are considered part of the Oracle data dictionary.

    If you have the catrepr.sql script from a 9.2.0.1 database, it will still work on 8.1.7.4, the 9.2 releases, or the 10g releases, since there were no enhancements or architectural changes in 10g or later. Replication can be re-added using the catrep.sql script in all versions. However, do not forget that Oracle does not suggest or support any removal of the replication-related database tables, views, and packages from 9.2.0.2 onward.

    Hope that answers your questions; let me know if you need clarification on this.

    Thank you
    Florent

  • ALTER REPLICATION to add an element

    Hello!!

    I need your help again...

    I defined a replication scheme on my data store 'testds2' like this...

    CREATE REPLICATION testuser.prueba_replicacion
    ELEMENT test TABLE testuser.tab
    MASTER testds2 ON '10.129.5.23';
    SUBSCRIBER ds18 ON '10.129.5.18';
    STORE testds2 ON '10.129.5.23'; PORT 17003
    STORE ds18 ON '10.129.5.18'; PORT 17012;

    I need to add an AWT cache group to this replication scheme...

    I tried to run the ALTER REPLICATION command, but I don't understand how it works...


    Command> ALTER REPLICATION PRUEBA_REPLICACION23TO18
    ALTER ELEMENT testds2 DATASTORE
    INCLUDE CACHE GROUP testuser.ROOT_CHILD;
     1001: Syntax error in SQL statement before or at: "DATASTORE", character position: 66
    ...RUEBA_REPLICACION23TO18 ALTER ELEMENT testds2 DATASTORE INCLUDE CAC...
                                                     ^^^^^^^^^
    The command failed.
    Command>


    Help, please

    Your CREATE REPLICATION statement has too many semicolons in it, but I guess that's just a typo in your message. It should read:

    CREATE REPLICATION testuser.prueba_replicacion
    ELEMENT test TABLE testuser.tab
    MASTER testds2 ON '10.129.5.23'
    SUBSCRIBER ds18 ON '10.129.5.18'
    STORE testds2 ON '10.129.5.23' PORT 17003
    STORE ds18 ON '10.129.5.18' PORT 17012;

    When you use ALTER REPLICATION you alter the scheme. However, you cannot use INCLUDE here, since you have defined a legacy table-level replication scheme. INCLUDE (and EXCLUDE) only work with legacy datastore-level replication or an active/standby pair. If you want to add an AWT cache group to a legacy table-level scheme (which is not something we would recommend!) you must add each table in the cache group to the scheme separately, something like:

    ALTER REPLICATION testuser.prueba_replicacion
    ADD ELEMENT awttab1 TABLE testuser.awttable1
    MASTER testds2 ON '10.129.5.23'
    SUBSCRIBER ds18 ON '10.129.5.18'
    ADD ELEMENT awttab2 TABLE testuser.awttable2
    MASTER testds2 ON '10.129.5.23'
    SUBSCRIBER ds18 ON '10.129.5.18'
    ... ;

    Please note that although you can include AWT cache group tables in legacy replication schemes, this configuration is strongly discouraged, since legacy replication does not properly support cached tables. This configuration is not guaranteed to maintain consistency between the two AWT cache groups, or between Oracle and TimesTen. Also, consistent recovery after a failure can be complex!

    If you want to mix replication and cache groups you should really use active/standby pair replication (CREATE ACTIVE STANDBY PAIR), as it fully supports both READONLY and AWT cache groups and is guaranteed both to maintain consistency and to allow correct recovery.

    Chris

  • Triggers in Active-Active configurations

    Hello
    I'm new to GoldenGate and looking for best practices regarding database triggers during replication with GoldenGate in an active/active configuration.

    Is the best practice to not replicate tables that are populated by triggers, or are there some magic settings?

    Scenario: two databases, both with active users. Tab1 in both databases has a trigger that updates tab2. Is the best practice to simply replicate tab1 with GoldenGate?


    Thank you very much for your help!

    Hello

    I think this will help you...

    SUPPRESSTRIGGERS | NOSUPPRESSTRIGGERS
    Valid for Replicat for Oracle. Prevents triggers from firing on target objects that are configured for replication with Oracle GoldenGate. You can use this parameter for Oracle 10.2.0.5 and later patches, and for Oracle 11.2.0.2 and later, instead of manually disabling triggers. To use this option, the Replicat user must be an Oracle Streams administrator, which can be granted by calling dbms_goldengate_auth.grant_admin_privilege.
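
    A minimal sketch of that grant (the Replicat user name GGADMIN is a placeholder):

    exec dbms_goldengate_auth.grant_admin_privilege('GGADMIN');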
