Physical read of Checkpoint/T-Logs of TimesTen IMDB

My data store is memory (RAM) constrained, so I want to flush the data from the IMDB to its checkpoint/transaction log files with the idea of freeing the memory.

Is there any way by which data in the IMDB can be deleted (freeing the memory) and then, on request, loaded back into the IMDB? Something like Oracle's physical reads from disk. I am aware that this is possible with the IMDB Cache using the aging concept, but my setup has no underlying Oracle.

In other words, I want data in the IMDB to be aged out, and I need it back from persistent storage (checkpoint/T-logs) on request. The first part of the question (age-out in IMDB) seems to be possible... and I need confirmation on the second part of the question.

Creating a new data store to work around the memory constraint is not an option, because the constraint is at the host level and not at the data store level.

Published by: Jeganlal on December 9, 2010 11:14

No, it's not possible using standalone TimesTen. TimesTen is an in-memory database, and there is no concept of some data residing only on disk. The checkpoint files and transaction logs are there purely for persistence and transactionality of the in-memory data. Either you need more memory, or you need to switch to an IMDB Cache scenario rather than standalone TimesTen.

Chris
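For reference: in the IMDB Cache scenario Chris mentions, the memory footprint can be controlled explicitly by unloading and reloading cache groups on demand; standalone TimesTen has no equivalent. A minimal sketch, assuming a hypothetical, explicitly loaded cache group myuser.cg_orders whose root table is orders:

-- drop the cached rows from TimesTen memory (the data remains in Oracle)
UNLOAD CACHE GROUP myuser.cg_orders;

-- later, reload on request - optionally only the rows that are needed
LOAD CACHE GROUP myuser.cg_orders
  WHERE (orders.region = 'EMEA')
  COMMIT EVERY 256 ROWS;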

Tags: Database

Similar Questions

  • "Physics read total multi block applications ' wrong?

    Hello

    The statistics "block the physical demands reading total multi" should count the number of calls to e/s over a reading block (doc here). So I expect to be equal to the 'file db scattered read' (or 'direct path read' if read series direct-path of access is used).

    But this is not what I get in the following test case:

    SQL > create table DEMO pctfree 99 as select rpad('x',1000,'x') n from dual connect by level <= 1000;

    -- I get the v$filestat statistics in order to compute a delta at the end:

    SQL > select phyrds, phyblkrd, singleblkrds from v$filestat where file# = 6;

    PHYRDS PHYBLKRD SINGLEBLKRDS

    ---------- ---------- ------------

          6291      96488         3286

    I'm doing a full table scan on a table of 1000 blocks, forcing sql_trace and buffered reads:

    SQL > connect demo/demo

    Connected.

    SQL > alter session set "_serial_direct_read" = never;

    Session altered.

    SQL > alter session set events 'sql_trace wait=true';

    Session altered.

    SQL > select count(*) from DEMO;

    COUNT (*)

    ----------

    1000

    SQL > alter session set events 'sql_trace off';

    The session statistics show 30 I/O requests, but only 14 multi block requests:

    SQL > select name, value from v$mystat join v$statname using(statistic#) where name like 'phy%' and value > 0;

    NAME                                          VALUE
    ---------------------------------------- ----------
    physical read total IO requests                  30
    physical read total multi block requests         14
    physical read total bytes                   8192000
    physical reads                                 1000
    physical reads cache                           1000
    physical read IO requests                        30
    physical read bytes                         8192000
    physical reads cache prefetch                   970

    However, I did 30 'db file scattered read' waits:

    SQL > select event, total_waits from v$session_event where sid = sys_context('userenv','sid');

    EVENT                                    TOTAL_WAITS
    ---------------------------------------- -----------
    Disk file operations I/O                           1
    log file sync                                      1
    db file scattered read                            30
    SQL*Net message to client                         20
    SQL*Net message from client                       19

    And V$FILESTAT also counts them as multiblock reads:

    SQL > select phyrds-6291 phyrds, phyblkrd-96488 phyblkrd, singleblkrds-3286 singleblkrds from v$filestat where file# = 6;

    PHYRDS PHYBLKRD SINGLEBLKRDS

    ---------- ---------- ------------

    30       1000            0

    And this is confirmed by sql_trace:

    SQL > column tracefile new_value tracefile

    SQL > select tracefile from v$process where addr = (select paddr from v$session where sid = sys_context('USERENV','SID'));

    TRACEFILE

    ------------------------------------------------------------------------

    /U01/app/Oracle/diag/RDBMS/dbvs103/DBVS103/trace/DBVS103_ora_31822.TRC

    SQL > host mv &tracefile serial-direct-path.trc

    SQL > host grep ^WAIT serial-direct-path.trc | grep read | nl

    1  WAIT #139711696129328: nam='db file scattered read' ela= 456 file#=6 block#=5723 blocks=5 obj#=95361 tim=48370540177
    2  WAIT #139711696129328: nam='db file scattered read' ela= 397 file#=6 block#=5728 blocks=8 obj#=95361 tim=48370542452
    3  WAIT #139711696129328: nam='db file scattered read' ela= 449 file#=6 block#=5737 blocks=7 obj#=95361 tim=48370543216
    4  WAIT #139711696129328: nam='db file scattered read' ela= 472 file#=6 block#=5744 blocks=8 obj#=95361 tim=48370543816
    5  WAIT #139711696129328: nam='db file scattered read' ela= 334 file#=6 block#=5753 blocks=7 obj#=95361 tim=48370544276
    6  WAIT #139711696129328: nam='db file scattered read' ela= 425 file#=6 block#=5888 blocks=8 obj#=95361 tim=48370544848
    7  WAIT #139711696129328: nam='db file scattered read' ela= 304 file#=6 block#=5897 blocks=7 obj#=95361 tim=48370545370
    8  WAIT #139711696129328: nam='db file scattered read' ela= 599 file#=6 block#=5904 blocks=8 obj#=95361 tim=48370546190
    9  WAIT #139711696129328: nam='db file scattered read' ela= 361 file#=6 block#=5913 blocks=7 obj#=95361 tim=48370546682
    10  WAIT #139711696129328: nam='db file scattered read' ela= 407 file#=6 block#=5920 blocks=8 obj#=95361 tim=48370547224
    11  WAIT #139711696129328: nam='db file scattered read' ela= 359 file#=6 block#=5929 blocks=7 obj#=95361 tim=48370547697
    12  WAIT #139711696129328: nam='db file scattered read' ela= 381 file#=6 block#=5936 blocks=8 obj#=95361 tim=48370548287
    13  WAIT #139711696129328: nam='db file scattered read' ela= 362 file#=6 block#=6345 blocks=7 obj#=95361 tim=48370548762
    14  WAIT #139711696129328: nam='db file scattered read' ela= 355 file#=6 block#=6352 blocks=8 obj#=95361 tim=48370549218
    15  WAIT #139711696129328: nam='db file scattered read' ela= 439 file#=6 block#=6361 blocks=7 obj#=95361 tim=48370549765
    16  WAIT #139711696129328: nam='db file scattered read' ela= 370 file#=6 block#=6368 blocks=8 obj#=95361 tim=48370550276
    17  WAIT #139711696129328: nam='db file scattered read' ela= 1379 file#=6 block#=7170 blocks=66 obj#=95361 tim=48370552358
    18  WAIT #139711696129328: nam='db file scattered read' ela= 1205 file#=6 block#=7236 blocks=60 obj#=95361 tim=48370554221
    19  WAIT #139711696129328: nam='db file scattered read' ela= 1356 file#=6 block#=7298 blocks=66 obj#=95361 tim=48370556081
    20  WAIT #139711696129328: nam='db file scattered read' ela= 1385 file#=6 block#=7364 blocks=60 obj#=95361 tim=48370557969
    21  WAIT #139711696129328: nam='db file scattered read' ela= 832 file#=6 block#=7426 blocks=66 obj#=95361 tim=48370560016
    22  WAIT #139711696129328: nam='db file scattered read' ela= 1310 file#=6 block#=7492 blocks=60 obj#=95361 tim=48370563004
    23  WAIT #139711696129328: nam='db file scattered read' ela= 1315 file#=6 block#=9602 blocks=66 obj#=95361 tim=48370564728
    24  WAIT #139711696129328: nam='db file scattered read' ela= 420 file#=6 block#=9668 blocks=60 obj#=95361 tim=48370565786
    25  WAIT #139711696129328: nam='db file scattered read' ela= 1218 file#=6 block#=9730 blocks=66 obj#=95361 tim=48370568282
    26  WAIT #139711696129328: nam='db file scattered read' ela= 1041 file#=6 block#=9796 blocks=60 obj#=95361 tim=48370569809
    27  WAIT #139711696129328: nam='db file scattered read' ela= 300 file#=6 block#=9858 blocks=66 obj#=95361 tim=48370570501
    28  WAIT #139711696129328: nam='db file scattered read' ela= 281 file#=6 block#=9924 blocks=60 obj#=95361 tim=48370571248
    29  WAIT #139711696129328: nam='db file scattered read' ela= 305 file#=6 block#=9986 blocks=66 obj#=95361 tim=48370572021
    30  WAIT #139711696129328: nam='db file scattered read' ela= 347 file#=6 block#=10052 blocks=60 obj#=95361 tim=48370573387

    So I'm sure that I did 30 multiblock reads, but 'physical read total multi block requests' = 14.

    Any idea why?

    Thanks in advance,

    Franck.

    Couldn't leave it alone.

    I ran some additional tests with different extent sizes, with and without ASSM.

    It seems that my platform does not include db file scattered reads smaller than 128KB (or 16 blocks, given that I was using an 8KB block size) in the count.

    Regards

    Jonathan Lewis
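    This is consistent with the trace listing above: the first 16 scattered-read waits are only 5-8 blocks each (under 16 blocks, i.e. under 128KB with an 8KB block size), while the remaining 14 waits are 60-66 blocks each - matching the reported value of 14 for 'physical read total multi block requests'.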

  • Oracle trace: shouldn't the number of physical disk block reads be reflected in the wait events?

    Hi all


    Until yesterday, I was under the impression that, in an Oracle trace file, the number of physical disk block reads should be reflected in the wait events section.

    Yesterday we got this trace file (Oracle 11gR2):

    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.04       0.02          0         87          0           0
    Fetch        9      1.96       7.81      65957     174756          0         873
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total       11      2.01       7.84      65957     174843          0         873

    Elapsed times include waiting on following events:

      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       9        0.00          0.00
      reliable message                                1        0.00          0.00
      enq: KO - fast object checkpoint                1        0.00          0.00
      direct path read                             5074        0.05          5.88
      SQL*Net more data to client                     5        0.00          0.00
      SQL*Net message from client                     9        0.01          0.00

    ********************************************************************************

    We can see that the 65957 physical disk reads resulted in only 5074 direct path read waits. Normally, the number of physical disk reads is more directly reflected in the wait events section.

    Is this normal? Why is that? Maybe because these disks are on a SAN that has a cache?

    Best regards.

    Carl

    direct path read is a multiblock I/O operation; it does not just read 1 block
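    That would also account for the numbers above: 65957 disk blocks over 5074 direct path read waits is roughly 13 blocks per wait, so each wait event covers one multiblock request rather than one block.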

  • question about physical reads

    Is RMAN I/O taken into account in the 'physical reads' statistic?

    I mean, are the blocks read from disk by RMAN to back up the database represented in the physical reads statistics?

    One of our databases has a value for 'physical read total IO requests' greater than 'physical reads', so I guess the missing physical reads are from RMAN.

    Thanks in advance

    Are blocks read from disk by RMAN to back up the database represented in the physical reads statistics?

    Yes, I believe so. It is something that can be tested by looking at the session statistics for the RMAN sessions.

    Hemant K Collette

  • How to check instance datafile I/O? How to solve high physical reads/writes?

    Method 1 -
    identify 'hot spots' or I/O contention:
    select df.name,
           fs.phyrds "Physical Reads",
           round((fs.phyrds / pd.phys_reads) * 100, 2) "Read %",
           fs.phywrts "Physical Writes",
           round(fs.phywrts * 100 / pd.phys_wrts, 2) "Write %",
           fs.phyblkrd + fs.phyblkwrt "Total Block I/O"
    from (select sum(phyrds)  phys_reads,
                 sum(phywrts) phys_wrts
          from v$filestat) pd,
         v$datafile df,
         v$filestat fs
    where df.file# = fs.file#
    order by fs.phyblkrd + fs.phyblkwrt desc;


    Another method -
    On Oracle 10g, AWR also provides the dba_hist_filestatxs view to track disk I/O:

    break on begin_interval_time skip 2

    column phyrds format 999,999,999
    column begin_interval_time format a25

    select
       begin_interval_time,
       filename,
       phyrds
    from
       dba_hist_filestatxs
    natural join
       dba_hist_snapshot;

    Is this the method that you use to check datafile I/O? And how do you solve high physical reads and writes?

    Ankit Ashok Aggarwal wrote:
    AWR reports I/O statistics in terms of segments, queries, etc.
    Here, my concern is what the ideal threshold value should be for a DBA to act on high physical reads/writes,
    and how he can fix this problem if it affects database performance?

    I don't think there is any way to say that a given amount of I/O is going to be good or bad for a database in general. The reason is that it is not quantifiable. For example, in a store, maybe one day no one comes in, and another day there is no room to stand (because the store announced a sale). So how do you say what was or wasn't necessary? Rather than searching for best-practice values, it is better to keep track of the normal behaviour of your DB, and when you see a spike in the user I/O wait events such as 'db file scattered read', 'db file parallel read', etc., go back over the work you are doing and see whether it can be tuned somehow.

    Aman...
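    As a rough illustration of "tracking the normal behaviour of your DB", here is a sketch that trends the user I/O wait events across AWR snapshots (assuming the Diagnostics Pack DBA_HIST views are available; the event list can be adjusted):

    select sn.begin_interval_time,
           ev.event_name,
           ev.total_waits
             - lag(ev.total_waits) over (partition by ev.event_name order by ev.snap_id) waits_delta,
           ev.time_waited_micro
             - lag(ev.time_waited_micro) over (partition by ev.event_name order by ev.snap_id) time_delta_us
    from   dba_hist_system_event ev
    join   dba_hist_snapshot sn
           on  sn.snap_id = ev.snap_id
           and sn.dbid = ev.dbid
           and sn.instance_number = ev.instance_number
    where  ev.event_name in ('db file sequential read', 'db file scattered read', 'db file parallel read')
    order by ev.event_name, ev.snap_id;

    A sustained jump in these deltas compared to your usual baseline is a better trigger for investigation than any fixed threshold.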

  • more physical reads than blocks (in this partition)

    Hi all

    I have a table hash-partitioned on an indexed column 'id', which is non-unique and part of my primary key. Inside each partition, rows with the same id are located close to each other, which was done by a dbms_redefinition reorg using orderby_cols. The intention is to reduce the amount of physical reads, as there are no queries that don't filter on the id column.

    However, what I see is a large number of physical reads. The first partition has roughly 80K rows, an average row length of 347, a block size of 8K and row compression... resulting in 821 blocks. And when (after flushing the buffer cache and the shared pool) I submit a query that filters on 'id' only and returns 106 rows, I see roughly 1400 physical reads.

    --------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                          | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    --------------------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                   |       |   106 | 36782 |     3   (0)| 00:00:01 |       |       |
    |   1 |  PARTITION HASH SINGLE             |       |   106 | 36782 |     3   (0)| 00:00:01 |     1 |     1 |
    |   2 |   TABLE ACCESS BY LOCAL INDEX ROWID| XXX   |   106 | 36782 |     3   (0)| 00:00:01 |     1 |     1 |
    |*  3 |    INDEX RANGE SCAN                | XXXXX |     1 |       |     1   (0)| 00:00:01 |     1 |     1 |
    --------------------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       3 - access("ID"=49743)


    Statistics
    ----------------------------------------------------------
          22243  recursive calls
              0  db block gets
          66651  consistent gets
           1404  physical reads
              0  redo size
          10933  bytes sent via SQL*Net to client
            299  bytes received via SQL*Net from client
              9  SQL*Net roundtrips to/from client
            150  sorts (memory)
              0  sorts (disk)
            106  rows processed


    I expected to see around 10-15 physical reads. And I don't understand how there can be 1400 physical reads with only 821 blocks in this partition. Can someone explain this behaviour to me?

    Thanks in advance,
    Dirk

    DRabbit wrote:
    However, what I see is a large number of physical reads. The first partition has roughly 80K rows, an average row length of 347, a block size of 8K and row compression... resulting in 821 blocks. And when (after flushing the buffer cache and the shared pool) I submit a query that filters on 'id' only and returns 106 rows, I see roughly 1400 physical reads.

    --------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                          | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    --------------------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                   |       |   106 | 36782 |     3   (0)| 00:00:01 |       |       |
    |   1 |  PARTITION HASH SINGLE             |       |   106 | 36782 |     3   (0)| 00:00:01 |     1 |     1 |
    |   2 |   TABLE ACCESS BY LOCAL INDEX ROWID| XXX   |   106 | 36782 |     3   (0)| 00:00:01 |     1 |     1 |
    |*  3 |    INDEX RANGE SCAN                | XXXXX |     1 |       |     1   (0)| 00:00:01 |     1 |     1 |
    --------------------------------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
    3 - access("ID"=49743)
    
    Statistics
    ----------------------------------------------------------
    22243  recursive calls
    0  db block gets
    66651  consistent gets
    1404  physical reads
    0  redo size
    10933  bytes sent via SQL*Net to client
    299  bytes received via SQL*Net from client
    9  SQL*Net roundtrips to/from client
    150  sorts (memory)
    0  sorts (disk)
    106  rows processed
    

    For me, the most important statistic is 'recursive calls'. Even for a partitioned table with (for example) a few thousand partitions, this seems very high for a simple parse and execute call that eventually accesses a single, known partition by index.

    Test strategies - rerun the query immediately to see if you get similar results; then run the query again ONLY after a buffer_cache (not shared_pool) flush.
    It is possible that there is more work going on in optimisation, or calls to (for example) an embedded pl/sql function. The behaviour of repeated executions may give you a clue on this subject.
    (And if none of these options help, then enabling sql_trace and checking the trace file should help.)

    Regards
    Jonathan Lewis
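    A minimal sketch of that test sequence (privileges permitting; the query below just stands in for the original statement that filters on id):

    -- 1. rerun the statement immediately (no flush) and compare 'physical reads'
    -- 2. flush only the buffer cache - not the shared pool - and rerun
    alter system flush buffer_cache;

    -- 3. if the numbers still don't make sense, trace the execution and read the trace file
    alter session set sql_trace = true;
    select /* test run */ count(*) from xxx where id = 49743;
    alter session set sql_trace = false;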

  • physical reads...

    Hello
    I want to gather statistics on a particular query and want to know how many physical reads this query does.
    How can I get that... v$sysstat gives a long list...

    Thank you all...

    Here are the various options that you can use with AUTOTRACE:

    SET AUTOTRACE OFF             - turns off AUTOTRACE
    SET AUTOTRACE ON EXPLAIN      - shows only the optimizer execution path
    SET AUTOTRACE ON STATISTICS   - shows only the execution statistics
    SET AUTOTRACE ON              - shows both the optimizer execution path
                                    and the execution statistics
    SET AUTOTRACE TRACEONLY       - like SET AUTOTRACE ON, but suppresses the query output
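    For the original question (physical reads of one particular query), a minimal sketch would be the following, where my_table is just a stand-in for your own table:

    SQL > set autotrace traceonly statistics
    SQL > select count(*) from my_table;     -- the query you want to measure
    SQL > set autotrace off

    The 'physical reads' line in the statistics section then applies to just that statement, instead of the instance-wide totals you get from v$sysstat.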

  • Physical read

    Hello

    How do I calculate physical reads and logical reads?

    THX.

    Hello

    You can have a quick glance at V$SEGMENT_STATISTICS and filter on the STATISTIC_NAME column.

    For example, if you want physical reads and logical reads for a TABLETEST segment (for example :)), you can use this query:

    select OBJECT_NAME,SUBOBJECT_NAME,statistic_name,value
    from V$SEGMENT_STATISTICS
    where statistic_name in ('physical reads','logical reads')
    and object_name = 'TABLETEST'
    order by 4
    

    If you want historical data, you can take a look at the DBA_HIST_SEG_STAT view.

    Laurent
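    For the historical case, a sketch along the same lines against the AWR segment statistics (assuming the Diagnostics Pack views DBA_HIST_SEG_STAT and DBA_HIST_SEG_STAT_OBJ with their usual _DELTA columns are available):

    select o.object_name, o.subobject_name, s.snap_id,
           s.logical_reads_delta, s.physical_reads_delta
    from   dba_hist_seg_stat s
    join   dba_hist_seg_stat_obj o
           on  o.dbid     = s.dbid
           and o.ts#      = s.ts#
           and o.obj#     = s.obj#
           and o.dataobj# = s.dataobj#
    where  o.object_name = 'TABLETEST'
    order by s.snap_id;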

  • What is the most common use of the TimesTen IMDB Cache?

    Hello
    TimesTen cache groups are based on per-table descriptions, and there is no way to load a whole schema at once.

    Does this mean that it is designed mainly to cache a small number of large tables, rather than all the tables of a schema, supporting something closer to a single application?

    I would like to know, from your experience, what the most common use of TimesTen is.

    Manually managing a large number of tables creates problems when some constraint is "missing" for TimesTen while creating the cache groups.

    I tried to cache the tables from a Primavera P6 project management schema (187 tables, plus other views, etc.). Trying the whole schema reported an error about a missing constraint. Loading a given subset resulted in a very small number of tables actually being cached, due to the lack of other related tables.


    So even if TimesTen is a very good solution, all the steps necessary to get a consistent, large number of cached tables do not seem to be so effective.

    Thank you
    Fabio Alfonso

    An application using TimesTen IMDB Cache should be designed for TimesTen and have the necessary hierarchy in place for it to load properly. Primavera was no doubt not designed that way.

    Most people use IMDB Cache in environments with very high transaction volumes, or in environments where peak loads exceed the ability to execute transactions directly against a disk-based RDBMS.
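    As an illustration of why a full schema is hard to cache wholesale: each cache group is a hierarchy rooted in one table, and every child table must carry a foreign key to its parent inside the same cache group, so a single missing constraint stops the whole definition. A sketch with hypothetical Primavera-like table and column names:

    CREATE READONLY CACHE GROUP cg_project
      AUTOREFRESH MODE INCREMENTAL INTERVAL 10 SECONDS
      FROM
        p6.project (
          project_id   NUMBER NOT NULL,
          project_name VARCHAR2(100),
          PRIMARY KEY (project_id)
        ),
        p6.task (
          task_id      NUMBER NOT NULL,
          project_id   NUMBER NOT NULL,
          PRIMARY KEY (task_id),
          FOREIGN KEY (project_id) REFERENCES p6.project (project_id)
        );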

  • Tool for reading the PCOIP server logs?

    I was wondering if there are log viewers or tools that make reading the pcoip server logs in the location below a little easier.

    c:\programdata\vmware\vdm\logs

    This is a nice tool; apart from that, you can use the Teradici tools:

    Tools for deployment of PCoIP Protocol (15134-1082)

    Keep this article handy if you are trying to solve disconnection problems:

    http://KB.VMware.com/kb/2012101

  • Top 10 SQL by physical reads

    Dear experts,

    Oracle version - 11.1.0.7

    Is there a way to pull the top 10 SQL statements (by physical reads) for a 24-hour period based on the AWR data or other V$ views?

    Thank you

    With DBA_HIST_SQLSTAT, all you can say is who the parsing user was,
    not the executing user.

    DBA_HIST_SQLSTAT is therefore insufficient for queries that are executed several times by several users, and there is not much of a solution.

    You could have a glance at the underlying ASH data - V$ACTIVE_SESSION_HISTORY and DBA_HIST_ACTIVE_SESS_HISTORY, which are respectively sampled data and a sample of a sample - and it could give an indication of who is running it.

    Published by: Dom Brooks on July 19, 2012 10:18
    Part of sentence was missing
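    A sketch of the kind of AWR query the original question points at - top 10 SQL by physical reads over roughly the last 24 hours - with the caveat above that it attributes the work to the parsing schema, not the executing user (assuming the usual DISK_READS_DELTA / EXECUTIONS_DELTA columns of DBA_HIST_SQLSTAT):

    select * from (
      select   st.sql_id,
               sum(st.disk_reads_delta)  disk_reads,
               sum(st.executions_delta)  executions
      from     dba_hist_sqlstat st
      join     dba_hist_snapshot sn
               on  sn.snap_id = st.snap_id
               and sn.dbid = st.dbid
               and sn.instance_number = st.instance_number
      where    sn.begin_interval_time >= sysdate - 1      -- last ~24 hours, adjust as needed
      group by st.sql_id
      order by sum(st.disk_reads_delta) desc
    )
    where rownum <= 10;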

  • Reading of vm-support logs

    I don't know Linux very well, and I don't know the paths to read the ESX logs.

    So I would like to know the most basic paths for reading the ESX logs from a vm-support bundle myself.

    For example, how could I read in the log why an ESX host froze.

    Thanks

    Julio,

    ______

    If you found this information useful, please consider awarding points (Correct or Helpful).

    This is the interpretation of the code 0/7 0x0 0x0 0x0.

    Here are some instructions for interpreting SCSI sense codes:

    http://KB.VMware.com/kb/289902

    Marcelo Soares

    VMWare Certified Professional 310

    Technical Support Engineer

    Linux Server Administrator

  • Help! My offline files from my home drive are copied to users' drives when I log on to their machines!

    I have users who use the Offline Files feature so that they can work on network files when off the network.  After using my credentials and working on a user's computer yesterday, copies of my files from my home drive now appear in the user's home drive.  Of course, this is not acceptable.  It could reveal sensitive or confidential information to the wrong person.

    Can someone tell me how this can happen?  The offline files cache should not be shared between different users of the same machine, right?  I log on to many people's machines while providing support and cannot have my personal files copied into the home directories of others.  FYI, we remap the My Documents shortcut to the user's home (H:) drive.

    What can I do to prevent that from happening again?

    Hi IT_John,

    Are you working on a domain network?

    The offline files cache should not be shared between different users of the same machine, right? That's right, as long as it is configured correctly; you can check the steps mentioned in the link below.

    Managing files and folders

    http://TechNet.Microsoft.com/en-us/library/bb457104.aspx

    See the article below, just for your reference, which deals with a similar question

    Files that you add to the Offline Files folder on a computer running Windows XP are synchronized when another person uses the computer

    http://support.Microsoft.com/kb/811660

    Thanks and regards,

    Ajay K

    Microsoft Answers Support Engineer

    Visit our Microsoft answers feedback Forum and let us know what you think.

  • Physical reads in the database

    Dear Experts,

    I need to get the distinct SQL IDs for the SQL statements executed by a particular user - "PROD" - between 09:00 and 17:00 (snap IDs 1000-1008). I wrote the SQL below:

    select distinct sql_id from DBA_HIST_SQLSTAT where PARSING_SCHEMA_NAME = 'PROD' and snap_id between 1000 and 1008

    Does anyone see a reason why this could be wrong?

    Thanks for sharing your input.

    select distinct sql_id from DBA_HIST_SQLSTAT where PARSING_SCHEMA_NAME = 'PROD' and snap_id between 1000 and 1008
    Does anyone see a reason why this could be wrong?

    The query looks good; it gets you what you need.
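    If the 09:00-17:00 window matters more than the exact snap IDs, a hedged variant is to derive the snapshot range from DBA_HIST_SNAPSHOT instead of hard-coding it (the timestamps below are placeholders for the day in question):

    select distinct st.sql_id
    from   dba_hist_sqlstat st
    join   dba_hist_snapshot sn
           on  sn.snap_id = st.snap_id
           and sn.dbid = st.dbid
           and sn.instance_number = st.instance_number
    where  st.parsing_schema_name = 'PROD'
    and    sn.begin_interval_time >= to_timestamp('2014-01-01 09:00', 'YYYY-MM-DD HH24:MI')
    and    sn.end_interval_time   <= to_timestamp('2014-01-01 17:00', 'YYYY-MM-DD HH24:MI');

    Note that DBA_HIST_SQLSTAT records the parsing schema, so this lists SQL parsed under PROD, which is usually - but not always - the same as SQL executed by PROD.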

  • Read-only cache against a Data Guard physical standby?

    Hi all

    We are trying to use TimesTen 11.2.2.4.1 as a read-only in-memory cache for an Oracle 11.2.3.0.7 schema on Linux RedHat 6.3, all using Oracle Data Guard to replicate the Oracle instance across geographically remote sites. At each site, we would like to have two TT instances synchronizing with the local Oracle 11g instance. It works very well against the master DB, but will the TT agents be able to synchronize against physical standby instances?

    The problem, it seems, is that the TT agent uses dedicated structures in the master Oracle instance (related to the cache grid) that will be replicated to the standby instances. Is the TT agent able to use those structures read-only to complete replication synchronization, or is this approach unworkable? What would be your advice on a way to do this?

    Thanks for your help,

    Chris

    No, I'm afraid that will not work and is not supported. We support the use of READONLY and AWT cache groups in conjunction with Data Guard in the following configuration (documented in the TimesTen IMDB Cache guide):

    1. Data Guard is configured as a synchronous physical standby

    2. All the TT caches run against the active Oracle DB

    3. In the event of a Data Guard failover, the TT caches can be migrated to run against the DB that has now been promoted from standby to active

    It is the only supported configuration.

    Kind regards

    Chris
