Multiple replicated Cache Services

I have 5 members in my cluster. I would like to have 2 replicated caches:

Cache 1: replicated across members (1,2)
Cache 2: replicated across members (1,2,3,4,5)

Is this possible?
Thank you.

Hello

You just need separate replicated services in the cache configuration. Here you go:

Configuration on members 1 and 2:

     <replicated-scheme>
          <scheme-name>RepL1</scheme-name>
          <service-name>RepL1</service-name>
          <backing-map-scheme>
               <local-scheme/>
          </backing-map-scheme>
          <autostart>true</autostart>
     </replicated-scheme>

     <replicated-scheme>
          <scheme-name>RepL2</scheme-name>
          <service-name>RepL2</service-name>
          <backing-map-scheme>
               <local-scheme/>
          </backing-map-scheme>
          <autostart>true</autostart>
     </replicated-scheme>

Configuration on members 3, 4 and 5:
The same, but without the cache mapping for cache 1 and without the <replicated-scheme> element for RepL1; only the RepL2 scheme is configured.

Client (application) nodes should have knowledge of both RepL1 and RepL2.
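A minimal sketch of what the configuration on members 3, 4 and 5 could look like, assuming the scheme and service names above; the cache name (cache2) is illustrative:

```xml
<cache-config>
     <caching-scheme-mapping>
          <cache-mapping>
               <cache-name>cache2</cache-name>
               <scheme-name>RepL2</scheme-name>
          </cache-mapping>
     </caching-scheme-mapping>
     <caching-schemes>
          <replicated-scheme>
               <scheme-name>RepL2</scheme-name>
               <service-name>RepL2</service-name>
               <backing-map-scheme>
                    <local-scheme/>
               </backing-map-scheme>
               <autostart>true</autostart>
          </replicated-scheme>
     </caching-schemes>
</cache-config>
```

Because members 3, 4 and 5 never join the RepL1 service, cache 1 is held only on members 1 and 2, while cache 2 is replicated across all five members.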

I hope this helps!

Cheers,
NJ

Published by: user738616 on April 20, 2012 09:28

Tags: Fusion Middleware

Similar Questions

  • Replicated Caches and POF still not working fully in 3.4.1 / 3.4p2

    Hello

    Replicated caches with separate PofSerializers still do not work for us in either 3.4.1b407 or 3.4b405p2.

    We use a replicated cache with a custom serializer component (a ConfigurablePofContext). At startup, the associated replicated cache service appears to load the ConfigurablePofContext, but there is no attempt to use the PofSerializers specified in the XML configuration. Unfortunately, changing our business objects to implement PortableObject directly does not work around the problem.

    Anyone have any better ideas? Is anyone else having trouble with PofSerializers?

    Thank you
    Tim

    Hi Tim,

    Could you tell me how you run your test? ...how many nodes, etc.?

    If I just run your TestMain, I don't see your PofSerializer called. This is because the data is held locally in object form in the replicated cache and should never be serialized unless it needs to be replicated to another node. If I start several nodes, then I do see your TestValueSerializer called. Could this be what you are seeing?

    The "java.lang.IllegalArgumentException: resource is not serializable" is certainly an issue and I will open a ticket for it. Of course, you can work around this by making your object Serializable.

    Let me know...

    Thank you
    Tom

  • Access and update values in replicated Cache

    I'm testing the following on a single node using a replicated cache:

    (1) store a Test object in the cache with key = 1
    (2) get the value for key 1 twice and assign it to two variables, retrievalOne and retrievalTwo
    (3) update retrievalTwo
    (4) the update to retrievalTwo is also applied to retrievalOne
    DefaultCacheServer.start();
    
    NamedCache cache = CacheFactory.getCache("Test");
    
    // Store the value into the replicated cache with Num = 1         
    TestObject obj = new TestObject();
    obj.setNum(1);
    cache.put(1, obj);
             
    // Retrieve the same value twice for the Key = 1
    TestObject retrievalOne = (TestObject)cache.get(1);
    TestObject retrievalTwo = (TestObject)cache.get(1);
    
    // Print the value of retrievalOne        
    System.out.println("Retrieval One (before Retrieval Two updated) : "+ retrievalOne.getNum());
             
    // Update retrievalTwo
    retrievalTwo.setNum(2);
    
    // Print the value of retrievalOne again        
    System.out.println("Retrieval One (after Retrieval Two updated) : "+ retrievalOne.getNum());
    Output
    Retrieval One (before Retrieval Two updated) : 1
    Retrieval One (after Retrieval Two updated) : 2
    So in the above code I never called cache.put to update the Coherence cache, yet updating retrievalTwo locally changed both the cached value and retrievalOne.

    Is it possible to prevent changes to the object from becoming visible before cache.put(updatedObject) is called on the same node (while using a replicated cache that does not clone objects locally)?

    my config of cache:
    <replicated-scheme>
         <scheme-name>data</scheme-name>
         <service-name>Cache_for_data</service-name>
         <serializer>
              <class-name>
                   com.tangosol.io.pof.ConfigurablePofContext
              </class-name>
              <init-params>
                   <init-param>
                        <param-type>string</param-type>
                        <param-value>pof-config.xml</param-value>
                   </init-param>
              </init-params>
         </serializer>     
         <lease-granularity>thread</lease-granularity>
         <thread-count>10</thread-count>
         <backing-map-scheme>
              <local-scheme />
         </backing-map-scheme>
         <autostart>true</autostart>
    </replicated-scheme>

    Hi Eric,

    the replicated cache maintains the Java objects and returns them as they are, until some code changes the cache entry.

    What you see is the result of retrievalOne and retrievalTwo being the same object, since there is no change to the entry between the two calls to get().

    In addition, it is not only your thread that may hold a reference to this object; other threads in the same JVM may, too.

    This is why the objects returned by a replicated cache, near cache or local cache are not safe to modify; you should put new objects into the cache under the same key with cache.put().

    Best regards

    Robert
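    The aliasing Robert describes can be reproduced with a plain java.util.HashMap standing in for the replicated cache's local object-form storage. This is only a sketch of the pattern, not Coherence code; the TestObject class and helper methods here are made up:

```java
import java.util.HashMap;
import java.util.Map;

// Demonstrates why mutating an object returned by get() is unsafe, and the
// copy-then-put pattern that avoids it. The Map stands in for a replicated
// cache's local storage, which hands back the same instance on every get().
public class CopyThenPut {
    static class TestObject {
        int num;
        TestObject(int num) { this.num = num; }
        TestObject copy() { return new TestObject(num); }
    }

    // Unsafe: mutate the instance returned by get() directly.
    static int unsafeUpdate(Map<Integer, TestObject> cache) {
        cache.put(1, new TestObject(1));
        TestObject retrievalOne = cache.get(1);
        TestObject retrievalTwo = cache.get(1);
        retrievalTwo.num = 2;      // mutates the single shared instance
        return retrievalOne.num;   // 2 -- retrievalOne sees the change
    }

    // Safe: mutate a copy, then publish it explicitly with put().
    static int safeUpdate(Map<Integer, TestObject> cache) {
        cache.put(1, new TestObject(1));
        TestObject retrievalOne = cache.get(1);
        TestObject updated = cache.get(1).copy();  // work on a copy
        updated.num = 2;
        cache.put(1, updated);     // publish the new value explicitly
        return retrievalOne.num;   // 1 -- the old reference is untouched
    }

    public static void main(String[] args) {
        System.out.println(unsafeUpdate(new HashMap<>())); // 2
        System.out.println(safeUpdate(new HashMap<>()));   // 1
    }
}
```

    The safe pattern is the second method: mutate a copy, then publish it with put(), so other references to the cached object never observe the intermediate change.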

  • Error putting an object into a replicated Cache

    Hello
    I'm running a single Coherence node with a replicated cache. But when I try to add an object to it, I get the exception below. However, I do not get this error when doing the same thing with a distributed cache. Can someone please tell me what I am doing wrong here?

    Caused by: java.io.IOException: readObject failed: java.lang.ClassNotFoundException: com.test.abc.pkg.RRSCachedObject
    at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:316)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
    at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:374)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:242)
    at java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:585)
    at com.tangosol.io.ResolvingObjectInputStream.resolveClass(ResolvingObjectInputStream.java:68)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1544)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1466)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1699)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
    at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.java:2145)
    at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2276)
    at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2673)
    at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:257)
    at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$ConverterFromBinary.convert(CacheServiceProxy.CDB:4)
    at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
    at com.tangosol.coherence.component.util.collections.wrapperMap.WrapperNamedCache.put(WrapperNamedCache.CDB:1)
    at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$WrapperNamedCache.put(CacheServiceProxy.CDB:2)
    at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$PutRequest.onRun(NamedCacheFactory.CDB:6)
    at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
    at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
    at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:28)
    at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
    at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
    at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:69)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:613)

    ClassLoader: java.net.URLClassLoader@b5f53a
    at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:261)
    at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$ConverterFromBinary.convert(CacheServiceProxy.CDB:4)
    at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
    at com.tangosol.coherence.component.util.collections.wrapperMap.WrapperNamedCache.put(WrapperNamedCache.CDB:1)
    at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$WrapperNamedCache.put(CacheServiceProxy.CDB:2)
    at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$PutRequest.onRun(NamedCacheFactory.CDB:6)
    at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
    at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
    at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:28)
    at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
    at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
    at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:69)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:613)


    This is my config file -

    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>*</cache-name>
                   <scheme-name>MY-replicated-cache-scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>

         <caching-schemes>

              <!--
              Replicated caching scheme.
              -->

              <replicated-scheme>
                   <scheme-name>MY-replicated-cache-scheme</scheme-name>
                   <service-name>ReplicatedCache</service-name>
                   <backing-map-scheme>
                        <local-scheme/>
                   </backing-map-scheme>
                   <lease-granularity>member</lease-granularity>
                   <autostart>true</autostart>
              </replicated-scheme>

              <proxy-scheme>
                   <service-name>ExtendTcpProxyService</service-name>
                   <thread-count>5</thread-count>
                   <acceptor-config>
                        <tcp-acceptor>
                             <local-address>
                                  <address>server</address>
                                  <port>port</port>
                             </local-address>
                             <receive-buffer-size>768k</receive-buffer-size>
                             <send-buffer-size>768k</send-buffer-size>
                        </tcp-acceptor>
                   </acceptor-config>
                   <autostart>true</autostart>
              </proxy-scheme>
         </caching-schemes>
    </cache-config>

    Published by: user1945969 on June 5, 2010 16:16

    By default, it should have used the FIXED unit-calculator. But looking at the trace, it seems that your replicated cache used BINARY as the unit-calculator.

    Could you try adding FIXED in the cache configuration for the cache being replicated?
    Or try just inserting an object (both key and value) that implements Binary.

    Check the unit-calculator section at this link:

    http://wiki.tangosol.com/display/COH35UG/local-scheme
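    A hedged sketch of where the unit-calculator element would go, based on the local-scheme documentation linked above; the scheme and service names are illustrative:

```xml
<replicated-scheme>
     <scheme-name>MY-replicated-cache-scheme</scheme-name>
     <service-name>ReplicatedCache</service-name>
     <backing-map-scheme>
          <local-scheme>
               <unit-calculator>FIXED</unit-calculator>
          </local-scheme>
     </backing-map-scheme>
     <autostart>true</autostart>
</replicated-scheme>
```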

  • Is replicated cache single thread?

    Hello

    The replicated cache scheme has no <thread-count> configuration option.
    1. Can the replicated scheme spawn a pool of worker threads?
    2. Which thread is used to perform replicated cache operations: the calling thread or a cache service event dispatch thread?
    3. Where are entry aggregations executed (locally or remotely, on the calling thread or on the event dispatch thread)?

    Thank you
    Alexey

    Hi Alexey,

    1. The replicated cache does not currently use worker threads.
    2, 3. All "read" operations are performed on the caller's thread.

    Kind regards
    Gene

  • Custom serializer on a replicated cache possible?

    Hi all

    I would like to know if it is possible to use a custom serializer on a replicated cache. I have it working successfully for a partitioned cache, but there doesn't seem to be a <serializer> tag for the replicated variant.

    Thank you
    Luke


    My excerpt from config.xml

    <distributed-scheme>
         <scheme-name>partitioned</scheme-name>
         <service-name>DistributedCache</service-name>

         <serializer>
              <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
              <init-params>
                   <init-param>
                        <param-type>string</param-type>
                        <param-value system-property="pof-config.xml">file:pof-config.xml</param-value>
                   </init-param>
              </init-params>
         </serializer>

         <backing-map-scheme>
              <class-scheme>
                   <scheme-ref>default-backing-map</scheme-ref>
              </class-scheme>
         </backing-map-scheme>

         <backup-count>0</backup-count>
    </distributed-scheme>

    <replicated-scheme>
         <scheme-name>default</scheme-name>
         <service-name>ReplicatedCache</service-name>
         <backing-map-scheme>
              <class-scheme>
                   <scheme-ref>default-backing-map</scheme-ref>
              </class-scheme>
         </backing-map-scheme>
    </replicated-scheme>
    </caching-schemes>

    <!-- backing map in memory, statistics disabled -->
    <class-scheme>
         <scheme-name>default-backing-map</scheme-name>
         <class-name>com.tangosol.util.SafeHashMap</class-name>
    </class-scheme>


    ...the pof-config.xml excerpt...

    <user-type>
         <type-id>105</type-id>
         <class-name>AuditEvent</class-name>
         <serializer>
              <class-name>AuditEventPOF</class-name>
         </serializer>
    </user-type>


    ...and what AuditEventPOF looks like...

    public class AuditEventPOF implements PofSerializer {
        public void serialize(PofWriter writer, java.lang.Object o) { ... }
        public java.lang.Object deserialize(PofReader reader) { ... }
    }

    It is a known issue that we had not yet documented on the Coherence 3.4 Known Issues page. I have just added it to the page, and the fix will be part of Patch 2, which will be published in a few days.

    Kind regards
    Gene

  • Caching Service broken for Macs?

    I ran into an issue that was posted some time ago and is still a problem.

    It seems that the caching server does not work for any of my Macs when they request updates from the Mac App Store. It doesn't matter whether the update is a system update or an application update.

    However, any request for an iOS-based app, no matter where the request comes from (iTunes on a Mac or the iOS device itself), is recognized, downloaded, cached, and served on demand to an iOS device or to iTunes on a Mac (the Updates pane in iTunes).

    On RARE occasions, a Mac App Store application shows up in the caching server log, but that ends up being the extent of it.

    One of the main reasons to use the caching server is to save download bandwidth and only need to download something once, but it does not work for the Macs (I have 5). All the Macs are running 10.11.4 (10.11.3 before, etc.). The current version of the Server application is 5.1, but this had been happening for some time before that update (I was hoping it would finally fix things).

    The caching service worked as expected, as far as I can remember, in Yosemite and before.

    So, has this feature actually been eliminated? Is it broken? I have re-installed Server, deleted the caches and restarted, but had no success.

    In the "Country Restrictions" section of https://support.apple.com/en-us/HT204675 :

    • iTunes Store downloads may not be cached if the IP address of the client is not associated with your iTunes Store region.

    Could this be your problem? In other words, is the Apple ID you use in the App Store registered in the country where your caching server is running?

  • How can I verify clients get updates from the caching Service?

    I've just set up a 10.11 server with Server.app version 5.0.15 to act as a caching server providing updates to Mac clients (no iOS devices).

    I can see that it has downloaded updates in the Stats section when I choose "Bytes Served" in the drop-down menu.

    I want to verify that it works and that clients receive updates from this server when in the office.

    I have done a lot of research but have not found anything definitive. I've seen suggestions to look at Library/Server/Caching/Logs/Debug.log, but nothing in it tells me whether or not a client received an update from the caching service. The Server Essentials book from Peachpit suggests looking at Activity Monitor to check network packets, but that only shows that the server successfully contacted Apple and verified the IP address.

    I need to verify that this caching service actually works by serving updates to clients on our network. Does anyone know how to do this?

    Thanks in advance.

    EDIT: I just checked the server again and it has downloaded only 226 MB of data. I set this up yesterday afternoon. Shouldn't there be much more data?

    https://help.Apple.com/ServerApp/Mac/5.0/?lang=en#/apd5E1AD52E-012B-4A41-8F21-8e9EDA56583A

    Set LogLevel -> verbose; I get enough information about the activity.

  • Where can I download the dynamic Cache Service for Windows Server 2008 R2

    I found the helpful note http://support.microsoft.com/kb/976618 and want to download the Dynamic Cache Service.

    Thank you

    Robert

    Hello

    Your question is more complex than what is generally answered in the Microsoft Answers forums. It is better suited to the IT Pro audience on TechNet. Please post your question in the TechNet Support forum. You can follow this link to post your question:

    http://social.technet.Microsoft.com/forums/en-us/winservergen/threads

  • Error 1053 while trying to start the Windows Font Cache Service

    original title: Windows Font Cache Service does not start

    When I turn on my computer, the Windows Font Cache Service gets stuck starting up, constantly using 25% of my CPU cycles. If I end the svchost.exe behind it and then try to start the service myself, I get error 1053, so I have virtually no control over the service at all: I cannot stop, restart, pause, or resume it. I can disable it so it does not even start when I turn on my computer, and that helps, but the service should start and stop cleanly.

    I changed it to Automatic (Delayed Start) and it started right up!
  • Need clarification regarding locking on a replicated Cache

    Hello
    I need to know what happens with locking on replicated caches when lease-granularity is configured as "member" or "thread".

    Suppose my cluster has 2 members with replicated caches, and one extend client writes data to the cache while another extend client reads the cached data for its calculations.

    Assume my lease-granularity setting is "member". My understanding:
    1. When I lock a particular key, that record is locked in both replicated caches and no other thread can update the key until I release the lock.
    2. While I hold the lock, any reader can still read the record for that key.
    3. Even if I update the record (while another reader is reading it), Coherence itself will take care of the update without corrupting the data or handing damaged data to the reader.

    Now suppose my lease-granularity setting is "thread":
    1. When I lock, it acts as a lock for the executing thread only, not a lock across all replicated nodes.
    2. Therefore, when two writers both lock, each gets its own lock and they can update the same record at the same time, which could cause a concurrent-update exception.
    3. Even though a writer has acquired the lock, while it is doing the update any reader can still read the data for that record.
    4. Coherence will handle the update of the data for the particular key itself (unless the concurrent-update exception occurs) and no reader will see corrupted data.


    Please correct me if I'm wrong. I need to know how this locking behaves when lease-granularity is configured as "member" or "thread". According to the documentation I've seen, the replicated cache supports the LOCK_ALL feature; I hope this means it can lock the whole cache as a global lock.

    I hope you guys can show me a way to add/update records while readers are reading the same cache.

    Kind regards
    Sura

    Published by: Sura August 3, 2011 20:25

    Published by: Sura August 4, 2011 00:26

    Hi Sura,

    If you use extend clients to update the replicated cache, it is recommended to:

    (1) specify lease-granularity = "thread" for the replicated cache,
    (2) prohibit, by convention, explicit locks in any client of the replicated cache, and
    (3) have extend clients update the replicated cache via EntryProcessors (http://download.oracle.com/docs/cd/E14526_01/coh.350/e14509/appbestextend.htm#CIHCJHFA)

    If granularity is set to "member", once any thread in that member acquires a lock, any other thread running in that node also has access to the lock (and any thread in that member can unlock the key). That said, lease-granularity = "thread" restricts access to a key to a single thread within the cluster.
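    The per-key atomicity that recommendation (3) buys you can be illustrated in plain Java: ConcurrentHashMap.compute() runs a read-modify-write atomically for one key, much as an EntryProcessor runs against the entry's owner. This is an analogy only, not Coherence code; the class and key names are made up:

```java
import java.util.concurrent.ConcurrentHashMap;

// Plain-Java analogy of EntryProcessor semantics: compute() runs the
// read-modify-write atomically for the key, so callers need no explicit lock.
public class EntryProcessorAnalogy {
    static final ConcurrentHashMap<String, Integer> cache = new ConcurrentHashMap<>();

    // The "processor": an atomic per-key read-modify-write.
    static int increment(String key) {
        return cache.compute(key, (k, v) -> v == null ? 1 : v + 1);
    }

    // Four concurrent writers, 1000 increments each; no updates are lost.
    static int runDemo() {
        cache.clear();
        Thread[] writers = new Thread[4];
        for (int i = 0; i < writers.length; i++) {
            writers[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) increment("counter");
            });
            writers[i].start();
        }
        for (Thread t : writers) {
            try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        return cache.get("counter");
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // 4000
    }
}
```

    In Coherence the equivalent call shape is cache.invoke(key, processor); the update then executes where the entry's lease lives, which is why no client-side lock is needed.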

    I hope this helps!

    Cheers,
    NJ

  • Concurrency with replicated Cache entry processors

    Hello

    The documentation says that entry processors invoked against a replicated cache are run on the initiating node.

    How is concurrency managed in this situation?
    What happens if two or more nodes are asked to run something on an entry at the same time?
    What happens if the initiating node is storage-disabled?

    Thank you!

    Jonathan.Knight wrote:
    In a distributed cache an entry processor runs on the node that owns the cache entry. In a replicated cache the same entries are on all nodes, so I think one of the questions was what happens in this scenario. I presume the EP executes on only one of the nodes (it would be unwise to run it on all nodes), but which one does it use? Is there still a concept of ownership for a replicated cache, or is it random?

    At this point I would normally have coded a quick experiment to prove what happens, but unfortunately I'm a little busy right now.

    JK

    Hi Jonathan,

    in the replicated cache there is always a notion of ownership of an entry; in Coherence terms it is called a lease. An entry is owned by the last node to perform a successful change on it, where a change can be a put/remove but also a lock operation. Lease granularity is per entry.

    Practically, the lock operation in the code Dimitri pasted serves two purposes. First, it ensures that no other node can lock the entry; second, it brings the lease to the locking node, so that node can correctly run the entry processor locally on the entry.

    Best regards

    Robert

  • Need info on replicated cache

    I have a simple question: we have a replicated cache that needs to be refreshed on a daily basis. If we call the NamedCache.clear() API before adding new items to the cache, will the clear() method also clear the corresponding caches on the other nodes in the cluster?


    Thank you
    Shashi

    Yes, NamedCache.clear() will clear your cache completely, regardless of the cache topology.

    As far as your client code is concerned, you can treat the cache as an ordinary hash map and Coherence will take care of the rest.

    HTH,

    ALEKS

  • Error in replicated cache (write-behind) led to an OOM error in the live cache

    We have an application with a live cache whose data is replicated to a backup cache. The replication is implemented by enabling write-behind from the live cache to its cache store.

    The problem is that if the backup cache runs out of memory, the live cache fails to send data to the backup cache and keeps re-queuing the data. Also, when the live cache nodes fail to connect to the backup cache, they keep retrying the connection immediately, which also leads to several exceptions on the live cache cluster. If the situation persists for some time, the live cache also runs out of memory and stops responding.


    In this way the backup cache actually creates a problem in the live cache. We want to ensure that if there is a problem in the backup cache, it does not affect the live cache. Any suggestions?

    Could you post the configuration details for these two caches?

    Perhaps limiting the size of the backup cache will solve your problem?

  • The Microsoft Windows Dynamic Cache Service issue is not resolved after installing the hotfix

    Installed the hotfix http://www.microsoft.com/download/en/details.aspx?id=9258 on Windows 2008 64-bit Enterprise edition.

    The Dynamic Cache Service shows as working fine in Service Manager. I changed the registry options, but the results showed no improvement.

    When copying or working with large documents (example: 35 GB in size) from a Windows 2008 64-bit system to a Windows 2008 64-bit shared network folder, system memory always climbs by as much as 9 GB. But between a Windows 2003 32-bit system and data on Windows 2008 64-bit there is no problem.

    What are the possible ways to solve this high memory consumption problem?

    Thank you
    Babu Nacka

    Hello

    Your question is more complex than what is generally answered in the Microsoft Answers forums. It is better suited to the Windows Server audience on TechNet. Please post your question in the TechNet forums. You can follow this link to post your question:
