Custom serializer on a replicated cache - possible?

Hi all

I would like to know if it is possible to use a custom serializer on a replicated cache. I have it working successfully on a partitioned cache, but there does not seem to be a <serializer> tag for the replicated version.

Thank you
Luke


An excerpt from my config.xml:

<distributed-scheme>
  <scheme-name>partitioned</scheme-name>
  <service-name>DistributedCache</service-name>

  <serializer>
    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
    <init-params>
      <init-param>
        <param-type>string</param-type>
        <param-value system-property="pof-config.xml">file:pof-config.xml</param-value>
      </init-param>
    </init-params>
  </serializer>

  <backing-map-scheme>
    <class-scheme>
      <scheme-ref>default-backing-map</scheme-ref>
    </class-scheme>
  </backing-map-scheme>

  <backup-count>0</backup-count>
</distributed-scheme>

<replicated-scheme>
  <scheme-name>default</scheme-name>
  <service-name>ReplicatedCache</service-name>
  <backing-map-scheme>
    <class-scheme>
      <scheme-ref>default-backing-map</scheme-ref>
    </class-scheme>
  </backing-map-scheme>
</replicated-scheme>
</caching-schemes>

<!-- in-memory backing map, statistics disabled -->
<class-scheme>
  <scheme-name>default-backing-map</scheme-name>
  <class-name>com.tangosol.util.SafeHashMap</class-name>
</class-scheme>


...the pof-config.xml excerpt...

<user-type>
  <type-id>105</type-id>
  <class-name>AuditEvent</class-name>
  <serializer>
    <class-name>AuditEventPOF</class-name>
  </serializer>
</user-type>


...and what AuditEventPOF looks like...

public class AuditEventPOF implements PofSerializer {
    public void serialize(PofWriter writer, java.lang.Object o) { ... }
    public java.lang.Object deserialize(PofReader reader) { ... }
}
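For readers unfamiliar with the pattern, here is a self-contained sketch of the same external-serializer shape using plain java.io streams instead of the Coherence PofWriter/PofReader API (the AuditEvent fields `id` and `message` are assumptions for illustration):

```java
import java.io.*;

// Plain-Java analogue of the PofSerializer pattern above: the serializer is a
// separate class from the data object, reading/writing fields by position.
class AuditEvent {
    final int id;
    final String message;
    AuditEvent(int id, String message) { this.id = id; this.message = message; }
}

class AuditEventSerializer {
    // Mirrors PofSerializer.serialize(PofWriter, Object)
    void serialize(DataOutputStream out, AuditEvent evt) throws IOException {
        out.writeInt(evt.id);        // property 0
        out.writeUTF(evt.message);   // property 1
    }
    // Mirrors PofSerializer.deserialize(PofReader)
    AuditEvent deserialize(DataInputStream in) throws IOException {
        return new AuditEvent(in.readInt(), in.readUTF());
    }
}

public class AuditEventRoundTrip {
    public static void main(String[] args) throws IOException {
        AuditEventSerializer ser = new AuditEventSerializer();
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ser.serialize(new DataOutputStream(buf), new AuditEvent(105, "login"));
        AuditEvent back = ser.deserialize(
            new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(back.id + ":" + back.message);  // 105:login
    }
}
```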

This is a known issue which we had not documented on the Coherence 3.4 Known Issues page. I have just added it to the page, and the fix will be part of Patch 2, which will be published in a few days.

Kind regards
Gene

Tags: Fusion Middleware

Similar Questions

  • Access and update values in replicated Cache

    I am running the test below on a single node using the replicated cache:

    (1) store a TestObject in the cache with key = 1
    (2) get the value for key 1 twice and assign it to two variables, retrievalOne and retrievalTwo
    (3) update retrievalTwo
    (4) the update to retrievalTwo is also reflected in retrievalOne
    DefaultCacheServer.start();
    
    NamedCache cache = CacheFactory.getCache("Test");
    
    // Store the value in to the replicated cache with Num = 1         
    TestObject obj = new TestObject();
    obj.setNum(1);
    cache.put(1, obj);
             
    // Retrieve the same value twice for the Key = 1
    TestObject retrievalOne = (TestObject)cache.get(1);
    TestObject retrievalTwo = (TestObject)cache.get(1);
    
    // Print the value for the Retrieval One        
    System.out.println("Retrieval One (before Retrieval One updated) : "+ retrievalOne.getNum());
             
    // Updating the retrieval Two
    retrievalTwo.setNum(2);
    
    // Print the value for the Retrieval One again        
    System.out.println("Retrieval One (after Retrieval One updated) : "+ retrievalOne.getNum());
    Output:
    Retrieval One (before Retrieval One updated) : 1
    Retrieval One (after Retrieval One updated) : 2
    So in the above code I did not call cache.put() to update the Coherence cache, but updating retrievalTwo locally changed the cached value and retrievalOne as well.

    Is it possible to prevent changes to the object from becoming visible before cache.put(updatedObject) on the same node (while using a replicated cache, without cloning objects locally)?

    my config of cache:
    <replicated-scheme>
         <scheme-name>data</scheme-name>
         <service-name>Cache_for_data</service-name>
         <serializer>
              <class-name>
                   com.tangosol.io.pof.ConfigurablePofContext
              </class-name>
              <init-params>
                   <init-param>
                        <param-type>string</param-type>
                        <param-value>pof-config.xml</param-value>
                   </init-param>
              </init-params>
         </serializer>     
         <lease-granularity>thread</lease-granularity>
         <thread-count>10</thread-count>
         <backing-map-scheme>
              <local-scheme />
         </backing-map-scheme>
         <autostart>true</autostart>
    </replicated-scheme>

    Hi Eric,

    the replicated cache holds the Java objects themselves and returns them as they are, so any code can change the cached entry.

    What you see is the result of retrievalOne and retrievalTwo being the same object, since there was no change between the two calls to get().

    In addition, it is not only your thread that may hold a reference to this object: other threads in the same JVM may, too.

    This is why objects returned by a replicated cache, near cache or local cache are not safe to mutate; you should put new objects into the cache for the same key with cache.put().

    Best regards

    Robert
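Robert's point can be reproduced with a plain HashMap standing in for the replicated cache's local storage (an illustrative stand-in, not the Coherence API): get() hands back the stored reference itself, so both retrievals alias one object.

```java
import java.util.HashMap;
import java.util.Map;

// A HashMap models the replicated cache's local storage (an assumption for
// illustration): two gets alias the same object, so a local mutation is
// visible through both references without any put().
public class SharedReferenceDemo {
    static class TestObject {
        int num;
    }

    public static void main(String[] args) {
        Map<Integer, TestObject> cache = new HashMap<>();

        TestObject obj = new TestObject();
        obj.num = 1;
        cache.put(1, obj);

        TestObject retrievalOne = cache.get(1);
        TestObject retrievalTwo = cache.get(1);

        retrievalTwo.num = 2;                    // no cache.put() involved
        System.out.println(retrievalOne.num);    // 2 - retrievalOne sees it

        // Treating cached objects as read-only avoids the surprise: copy
        // before mutating, then publish the copy with put().
        TestObject copy = new TestObject();
        copy.num = 3;
        cache.put(1, copy);
        System.out.println(retrievalOne.num);    // still 2 - old reference
    }
}
```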

  • Replicated Caches and POF still not fully working in 3.4.1 / 3.4p2

    Hello

    Replicated schemes with separate PofSerializers still do not work for us in Coherence 3.4.1b407 or 3.4b405p2.

    We use a replicated scheme with a custom serializer element (a ConfigurablePofContext). At startup, the associated replicated scheme service appears to load the ConfigurablePofContext, but no attempt is made to use the PofSerializers specified in the XML configuration. Unfortunately, changing our business objects to implement PortableObject directly does not work around the problem.

    Anyone have any better ideas? Is anyone else stuck with PofSerializers?

    Thank you
    Tim

    Hi Tim,

    Could you tell me how you run your test? ...how many nodes, etc.

    If I just run your TestMain I don't see your PofSerializer called. This is because the data is held locally in object form in the replicated cache and is never serialized unless it needs to be replicated to another node. If I start several nodes then I do see your TestValueSerializer called. Could this be what you are seeing?

    The 'java.lang.IllegalArgumentException: resource is not serializable' is certainly an issue and I will open a ticket for it. Of course, you can work around it by making your object Serializable.

    Let me know...

    Thank you
    Tom

  • Need clarification regarding locking in a replicated Cache

    Hello
    I need to know what happens when locking on replicated caches when we have configured lease-granularity as member or thread.

    Suppose my cluster has 2 members with replicated caches, plus one Extend client which writes the cached data and another Extend client which reads the cached data for its calculations.

    Assume my lease-granularity setting is member.
    My understanding:
    1. When I lock a particular key, the record with that key is locked in both replicated caches and no other thread can update that key until I release the lock.
    2. While I hold the lock, any reader can still read the record for that key.
    3. Even if I update the record (while another reader is reading it), Coherence itself will take care of the update without corrupting the record or giving damaged data to the reader.

    Assume my lease-granularity setting is thread.
    1. When I lock, it acts as a lock for the thread that is running, but not a lock across all replicated nodes.
    2. That is why, when two writers lock, they will both get the lock and can update the same record at the same time, which could cause a concurrent-update exception.
    3. Even though a writer has acquired the lock, while it is doing the update any reader can still read the data for that record.
    4. Coherence itself will handle the update of the data for the particular key (unless the concurrent-update exception occurs) and any reader will see uncorrupted data.


    Please correct me if I'm wrong. I need to know how this locking behaves when we configure member or thread lease-granularity. According to the documentation I have seen, the replicated cache supports the LOCK_ALL feature; I hope this means it can lock the whole cache as a global lock.

    I hope you can show me a way to add/update records in Coherence while some readers read the same cache.

    Kind regards
    Sura

    Published by: Sura August 3, 2011 20:25

    Published by: Sura August 4, 2011 00:26

    Hi Sura,

    If you use Extend clients to update the replicated cache, it is recommended to do the following:

    (1) specify lease-granularity = "thread" for the replicated cache,
    (2) prohibit, by convention, explicit locks in any client of the replicated cache, and
    (3) have the Extend clients update the replicated cache via EntryProcessors (http://download.oracle.com/docs/cd/E14526_01/coh.350/e14509/appbestextend.htm#CIHCJHFA)

    If the granularity is set to "member", once any thread in that member acquires a lock, every other thread running in that member also has access to the lock (and any thread in that member can unlock the key). By contrast, lease-granularity = "thread" restricts access to a locked key to a single thread within the cluster.
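A rough single-JVM analogy for the two granularities (an illustration only, not the Coherence API): a ReentrantLock behaves like lease-granularity "thread" - only the owning thread can hold or release it - while a Semaphore behaves like "member", where any thread in the process can release it.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;

// Loose analogy (an assumption, not Coherence behavior verbatim):
// ReentrantLock = per-thread ownership, Semaphore = per-process ownership.
public class LeaseGranularityAnalogy {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock threadLease = new ReentrantLock();
        threadLease.lock();
        Thread other = new Thread(() -> {
            // tryLock fails: the lease is held by main, per-thread granularity
            System.out.println("other thread acquired: " + threadLease.tryLock());
        });
        other.start();
        other.join();
        threadLease.unlock();

        Semaphore memberLease = new Semaphore(1);
        memberLease.acquire();                          // "member" locks the key
        Thread peer = new Thread(memberLease::release); // any thread may unlock
        peer.start();
        peer.join();
        System.out.println("permits after peer release: " + memberLease.availablePermits());
    }
}
```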

    I hope this helps!

    Cheers,
    NJ

  • Error in a replicated Cache object

    Hello
    I'm running a single Coherence node with a replicated cache. But when I try to add an object to it, I get the exception below. However, I do not get this error when doing the same thing with a distributed cache. Can someone please tell me what I am doing wrong here?

    Caused by: java.io.IOException: readObject failed: java.lang.ClassNotFoundException: com.test.abc.pkg.RRSCachedObject
    at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:316)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
    at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:374)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:242)
    at java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:585)
    at com.tangosol.io.ResolvingObjectInputStream.resolveClass(ResolvingObjectInputStream.java:68)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1544)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1466)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1699)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
    at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.java:2145)
    at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2276)
    at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2673)
    at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:257)
    at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$ConverterFromBinary.convert(CacheServiceProxy.CDB:4)
    at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
    at com.tangosol.coherence.component.util.collections.wrapperMap.WrapperNamedCache.put(WrapperNamedCache.CDB:1)
    at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$WrapperNamedCache.put(CacheServiceProxy.CDB:2)
    at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$PutRequest.onRun(NamedCacheFactory.CDB:6)
    at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
    at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
    at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:28)
    at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
    at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
    at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:69)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:613)

    ClassLoader: java.net.URLClassLoader@b5f53a
    at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:261)
    at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$ConverterFromBinary.convert(CacheServiceProxy.CDB:4)
    at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
    at com.tangosol.coherence.component.util.collections.wrapperMap.WrapperNamedCache.put(WrapperNamedCache.CDB:1)
    at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$WrapperNamedCache.put(CacheServiceProxy.CDB:2)
    at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$PutRequest.onRun(NamedCacheFactory.CDB:6)
    at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
    at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
    at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:28)
    at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
    at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
    at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:69)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:613)


    This is my config file -

    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>*</cache-name>
          <scheme-name>MY-replicated-cache-scheme</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>

      <caching-schemes>

        <!--
        Replicated caching scheme.
        -->
        <replicated-scheme>
          <scheme-name>MY-replicated-cache-scheme</scheme-name>
          <service-name>ReplicatedCache</service-name>
          <backing-map-scheme>
            <local-scheme>
            </local-scheme>
          </backing-map-scheme>
          <lease-granularity>member</lease-granularity>
          <autostart>true</autostart>
        </replicated-scheme>

        <proxy-scheme>
          <service-name>ExtendTcpProxyService</service-name>
          <thread-count>5</thread-count>
          <acceptor-config>
            <tcp-acceptor>
              <local-address>
                <address>server</address>
                <port>port</port>
              </local-address>
              <receive-buffer-size>768k</receive-buffer-size>
              <send-buffer-size>768k</send-buffer-size>
            </tcp-acceptor>
          </acceptor-config>
          <autostart>true</autostart>
        </proxy-scheme>
      </caching-schemes>
    </cache-config>

    Published by: user1945969 on June 5, 2010 16:16

    By default, it should have used the FIXED unit-calculator, but from the stack trace it seems that your replicated cache used BINARY as the unit-calculator.

    Could you try adding FIXED to the cache configuration of your replicated scheme?
    Or try inserting only objects (both key and value) that implement Binary.

    Check the part of the unit calculator on this link

    http://wiki.tangosol.com/display/COH35UG/local-scheme
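For reference, the unit-calculator element mentioned in the reply goes inside the local-scheme used as the backing map; a sketch only (the scheme and service names are assumptions):

```xml
<replicated-scheme>
  <scheme-name>MY-replicated-cache-scheme</scheme-name>
  <service-name>ReplicatedCache</service-name>
  <backing-map-scheme>
    <local-scheme>
      <unit-calculator>FIXED</unit-calculator>
    </local-scheme>
  </backing-map-scheme>
  <autostart>true</autostart>
</replicated-scheme>
```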

  • Concurrency with replicated Cache entry processors

    Hello

    The documentation says that replicated cache entry processors are executed on the initiating node.

    How is concurrency managed in this situation?
    What happens if two or more nodes are asked to run something on an entry at the same time?
    What happens if the initiating node is storage-disabled?

    Thank you!

    Jonathan.Knight wrote:
    In a distributed cache an entry processor runs on the node that owns the cache entry. In a replicated cache the same entries are on all nodes, so I think one of the questions was what happens in this scenario. I presume the EP executes on just one of the nodes - it would be unwise to run it on all of them - but which one? Is there still a concept of ownership for a replicated cache, or is it random?

    At this point I would normally have coded a quick experiment to prove what happens, but unfortunately I'm a little busy right now.

    JK

    Hi Jonathan,

    in the replicated cache there is always a notion of ownership of an entry; in Coherence terms it is called a lease. An entry is owned by the last node to perform a successful change on it, where a change can be a put/remove but also a lock operation. Lease granularity is per entry.

    Practically, the lock operation in the code Dimitri pasted serves two purposes: first, it ensures that no other node can lock the entry; second, it brings the lease to the locking node, so that it can correctly run the entry processor locally on the entry.

    Best regards

    Robert
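The "run the processor on the entry's owner" idea can be mimicked in a single JVM with ConcurrentHashMap.compute (an analogy only, not the Coherence API): the update runs atomically at the entry, so concurrent invocations on the same key cannot interleave.

```java
import java.util.concurrent.ConcurrentHashMap;

// Single-JVM analogy for an EntryProcessor: compute() runs the update function
// atomically at the entry, the way an EntryProcessor runs on the entry's
// owning node, so two concurrent callers cannot lose updates on the same key.
public class EntryProcessorAnalogy {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> cache = new ConcurrentHashMap<>();
        cache.put("counter", 0);

        Runnable increment = () -> {
            for (int i = 0; i < 1000; i++) {
                cache.compute("counter", (k, v) -> v + 1); // atomic per entry
            }
        };
        Thread a = new Thread(increment);
        Thread b = new Thread(increment);
        a.start(); b.start();
        a.join(); b.join();

        System.out.println(cache.get("counter")); // 2000 - no lost updates
    }
}
```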

  • Is replicated cache single thread?

    Hello

    The replicated cache scheme has no <thread-count> configuration option.
    1. Can the replicated scheme create a pool of worker threads?
    2. Which thread is used for replicated cache operations: the calling thread or the cache service's event dispatch thread?
    3. Where are entry aggregations executed (locally or remotely, on the calling thread or on the event dispatch thread)?

    Thank you
    Alexey

    Hi Alexey,

    1. The replicated cache does not currently use worker threads.
    2, 3. All 'read' operations are performed on the caller's thread.

    Kind regards
    Gene

  • Need info on replicated cache

    I have a simple question: we have a replicated cache that needs to be reloaded on a daily basis. If we call the NamedCache.clear() API before adding new items to the cache, will the clear() method also clear the corresponding caches on the other nodes in the cluster?


    Thank you
    Shashi

    Yes - NamedCache.clear() will clear your cache completely, regardless of the cache topology.

    As far as your client code is concerned, you can treat the cache as an ordinary hash map, and Coherence will take care of the rest.

    HTH,

    ALEKS
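The clear-then-reload pattern Shashi describes can be sketched with a plain Map standing in for NamedCache (an illustrative stand-in, not the Coherence API); in a replicated topology Coherence propagates the clear() to every node:

```java
import java.util.HashMap;
import java.util.Map;

// Daily-refresh sketch: clear the shared cache, then bulk-load the new data.
// The Map and the "rate" entry are assumptions for illustration.
public class DailyRefresh {
    public static void main(String[] args) {
        Map<String, String> cache = new HashMap<>();
        cache.put("stale", "yesterday");

        cache.clear();                        // NamedCache.clear() analogue
        Map<String, String> fresh = new HashMap<>();
        fresh.put("rate", "1.09");
        cache.putAll(fresh);                  // bulk load the new day's data

        System.out.println(cache.size());       // 1
        System.out.println(cache.get("stale")); // null - old entries gone
    }
}
```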

  • Several replicated Cache Services

    I have 5 members in my cluster. I would like to have 2 replicated caches:

    Cache 1: replicated across members (1,2)
    Cache 2: replicated across members (1,2,3,4,5)

    Is this possible?
    Thank you.

    Hello

    I missed the services part for the replicated schemes in the configuration. Here you go:

    Configuration on members 1.2:


              
                   RepL1
                   RepL1               
                   
                                            
                   

                   true
              

              
                   RepL2
                   RepL2               
                   
                        
                   

                   true
              

    Configuration on members 3,4,5:
    the same, but without the cache mapping for repl2 and without the replicated scheme element for repl2.

    Clients (application nodes) should have knowledge of both repl1 and repl2.

    I hope this helps!

    Cheers,
    NJ

    Published by: user738616 on April 20, 2012 09:28

  • Error in replicated cache (write-behind) led to an OOM error in the live cache

    We have an application with a live cache whose data is replicated to a backup cache. The backup cache is fed by enabling write-behind on the live cache's cache store.

    The problem is that if the backup cache runs out of memory, the live cache fails to send data to it and keeps requeuing the data. The backup cache nodes also fail to connect to the live cache and keep retrying to connect immediately, which causes several exceptions on the live cache cluster. If the situation persists for some time, the live cache also runs out of memory and stops responding.


    In this way the backup cache actually creates a problem in the live cache. We want to ensure that a problem in the backup cache does not affect the live cache. Any suggestions?

    Could you post the configuration details for these two caches?

    Maybe putting a size limit on the backup cache will solve your problem?

  • More custom fields in eCommerce - possible?

    I have a client who needs more custom fields to enter data on their products.

    How is this possible to do?

    It is what it is. You can build a web app for that section and put a module in a custom field to get a bit more, but that comes with more management overhead. There are other people with similar suggestions saying "Yes, it is possible," but it is more management and not really viable for the client.

    So, really, the answer is no.

  • Custom Zoom steps possible?

    I just got a 4K screen, 31.5 inches (140 dpi).

    When I open a web site design in Photoshop and view it at 100%, it shows smaller than in my browser, which scales it up.

    So in Photoshop I need about 150% zoom to view both at the same size.

    But Photoshop only has the 100% and 200% zoom steps using the Ctrl + shortcut.

    So I would like to add 150% as an additional fixed zoom step. Is that possible somehow?

    I know it can be set manually at the lower left each time, but typing it every time takes too long.

    (Photoshop team, please also consider 150% UI magnification instead of 200%, which is too much.)

    Save the code in a text file with the extension .jsx and place the file in the Photoshop subfolder Presets\Scripts. Restart Photoshop and you should see the script in the File > Scripts menu. Record an action that runs this script; you can then assign a keyboard shortcut to it.

    #target photoshop
    setZoom (150);
    
    function setZoom( zoom ) {
       var cTID = function(s) { return app.charIDToTypeID(s); };
       var docRes = activeDocument.resolution;
       activeDocument.resizeImage( undefined, undefined, 72/(zoom/100), ResampleMethod.NONE );
       var desc = new ActionDescriptor();
       var ref = new ActionReference();
       ref.putEnumerated( cTID( "Mn  " ), cTID( "MnIt" ), cTID( 'PrnS' ) );
       desc.putReference( cTID( "null" ), ref );
       executeAction( cTID( "slct" ), desc, DialogModes.NO );
       activeDocument.resizeImage( undefined, undefined, docRes, ResampleMethod.NONE );
    }
    
  • Custom interactive reports possible?

    4.2.1

    Hello

    Searched online about this but only found answers from a couple of years back. Is it possible to have an interactive report with a dynamic WHERE clause? For the report I would have something like

    select empid, emp_name from emp

    and then the WHERE clause is built separately in a process containing a PL/SQL block that sets the conditions, such as
    begin
    
    if :P1_TYPE = 1 then
    
      :WHERE_VAR := q'[where name like 'Jon%']';
    else
    
      :WHERE_VAR := 'where dept = 10';
    end if;
    
    end;
    and then the variable is referenced in the interactive report.

    Is this possible?


    Thank you
    Ryan

    ryansun wrote:
    4.2.1

    Hello

    Searched online about this but only found answers from a couple of years back. Is it possible to have an interactive report with a dynamic WHERE clause? For the report I would have something like

    select empid, emp_name from emp

    and then the WHERE clause is built separately in a process containing a PL/SQL block that sets the conditions, such as

    begin
    
    if :P1_TYPE = 1 then
    
      :WHERE_VAR := q'[where name like 'Jon%']';
    else
    
      :WHERE_VAR := 'where dept = 10';
    end if;
    
    end;
    

    and then the variable is referenced in the interactive report.

    Is this possible?

    No. There are various reasons, related to IR features, why changes to the report projection or to the result set from outside the IR break the current IR view or saved IR reports. This means that the IR query must remain fixed.

    You have probably already found the options for creating IRs on dynamic data sources - unions, APEX collections, views, pipelined functions. VPD may also be an option when you are using an EE database.

    For a simple example like the one above, use a union query with predicates that cause the unnecessary subqueries to be pruned:

    select empid, emp_name
    from  emp
    where name like 'Jon%'
    and   :p1_type = 1
    union all
    select empid, emp_name
    from  emp
    where dept = 10
    and   (:p1_type != 1 or :p1_type is null)
    

    (Although these predicates look much more like something that would be used in an IR filter rather than a dynamic data source. Note that you can create links and branches that set IR filters dynamically.)

  • Custom graph objects - is it possible?

    The task is to create my own chart in Illustrator which will interact with spreadsheet data, similar to Illustrator's built-in graphs. Is this possible? It looks like a scripting exercise to set up the basic objects and layout settings, then link one of those parameters (in this case a diameter) to an Excel file prepared with unique pixel measurements. A key requirement is that the chart should update when the data changes.

    Any ideas?

    Isn't scripting fun? It was a breeze (well - sort of) to get something running for InDesign... Now translating it into the very different JavaScript for Illustrator has proved harder than I thought!

    (Is there really no way to align text vertically within its frame? It works now - sort of - but it depends on the number of lines of text...)

    Type (or copy, or import) your values into a new text block. Separate the label and the value with a single tab. The values will be sorted automatically, and the largest value will appear in the middle; the rest will be distributed nicely around the edge. I noticed my mathematical distribution isn't exactly the same as the sample you provided... even allowing for a pixel here or there. I wonder why?

    For better or worse, this is what it produces - the first runner-up will always be at an angle of 45 degrees, and the rest are separated from each other by an exact distance of 5 points.

    and here's the script:

    //DESCRIPTION:Krazy Circular Diagrams
    // A Jongware Script 24-Sep-2010
    
    // Uses a tab-separated set of String / value data
    // which ought to be selected when running the script.
    // Use but do not abuse, please.
    
    if (app.documents.length == 0 || app.selection.length != 1 || !(app.selection[0].hasOwnProperty("baseline") || app.selection[0].hasOwnProperty("contents")))
    {
         alert ("Please select the text frame containing data");
    } else
    {
         var dataArray;
         var resultGroup;
    
         var parent_diameter = 100;
         var parent_position_x = app.activeDocument.activeView.centerPoint[0];
         var parent_position_y = app.activeDocument.activeView.centerPoint[1];
    
         var black = new GrayColor(); black.gray = 100;
         var white = new GrayColor(); white.gray = 0;
    
         dataArray = gatherValues(app.selection[0]);
         if (dataArray.length == 0)
              alert ("Unable to get sensible values .. please check");
         else
         {
              calculateValues ();
              resultGroup = app.activeDocument.groupItems.add();
              drawCircles(resultGroup);
         }
    }
    
    function gatherValues (fromItem)
    {
         var result = new Array();
         var l, line, lines, dataSource;
    
         if (fromItem.hasOwnProperty("baseline"))
              dataSource = fromItem.parentStory.contents;
         else
              dataSource = fromItem.contents;
    
         lines = dataSource.split ("\r");
    
         for (l=0; l position of center of B (anywhere)
              // oh, and distance from parent_center to child_center B is the same as to child_center A!
              // checking on Wikipedia, http://en.wikipedia.org/wiki/Law_of_sines
              // yields something like this ...
    
                   angle_diff = Math.asin ( ((dataArray[nextCircle+1][2]/2 + 5 + dataArray[nextCircle][2]/2)/2) / centerDistance);     // in Radians
                   angle_diff = 2*angle_diff * 180 / Math.PI;     // in Degrees
                   circleAngle = circleAngle - angle_diff;
              }
         }
    }
    
    function drawCircleAt (group, xpos, ypos, textAndSize, color, lineto_x, lineto_y, parentRad, angle)
    {
         var tframe, line;
    
    //     The connexion line, if any
         if (lineto_x != undefined && lineto_y != undefined && parentRad != undefined)
         {
              angle = Math.PI * angle / 180.0;
    
              line = group.pathItems.add();
              line.setEntirePath ( [[lineto_x + parentRad*Math.cos(angle),lineto_y + parentRad*Math.sin(angle)], [xpos - textAndSize[2]/2*Math.cos(angle),ypos - textAndSize[2]/2*Math.sin(angle)]]);
              line.filled = false;
              line.stroked = true;
              line.strokeWidth = 0.5;
              line.strokeColor = black;
         }
    
    //     The Circle
         circle = group.pathItems.ellipse (ypos+textAndSize[2]/2, xpos-textAndSize[2]/2, textAndSize[2], textAndSize[2]);
         circle.strokeColor = black;
         circle.strokeWidth = 0.5;
         circle.fillColor = color;
    
    //     The Text Frame
         tframe = app.activeDocument.pathItems.rectangle(ypos+textAndSize[2]/6, xpos-textAndSize[2]/2, textAndSize[2], textAndSize[2]/2);
         tframe = group.textFrames.areaText(tframe);
         tframe.contents = textAndSize[0];
         tframe.wrapInside = false;
         tframe.wrapped = false;
         tframe.textRange.hyphenation = false;
         tframe.textRange.justification = Justification.CENTER;
         tframe.textRange.textFont = app.textFonts.getByName("MyriadPro-Regular");
         tframe.textRange.characterAttributes.size = textAndSize[2]/6;
         tframe.textRange.fillColor = white;
    
    }
    
    function sortByValue (a,b)
    {
         a = Number(a[1]);
         b = Number(b[1]);
         return (a == b) ? 0 : (a < b ? 1 : -1);
    }
    
  • Custom captions in the "cache"

    When I create a new, custom caption and access it via the caption type drop-down in the text caption property box, it seems to become a permanent part of that drop-down. Even if I delete the source files from the Gallery/captions folder, the reference remains in the drop-down. As a result, I have several versions of my early tests in there.

    I have figured out how to avoid adding captions that I do not use (closing the file without saving does not capture the image into the drop-down), but I want to clean up the list. I have to scroll through several "additions" to reach the live entries. Is there a way to get into this "memory" that stores the insertions, to remove them?

    Thank you!
    ...Karen

    Jinx importation has not been around.

    Well, at least we learned something new and you are able to move forward. Good luck!
