2 Clusterware on the same node

Hey everyone in this forum,

I want to ask: is it possible to install two Clusterware stacks (Grid Infrastructure) on the same machine? We would have two ORA_GRID_HOME directories on the same machine, to run two completely separate solutions.

Thank you very much for your time.

user12215372 wrote:
Hey everyone in this forum,

I want to ask: is it possible to install two Clusterware stacks (Grid Infrastructure) on the same machine? We would have two ORA_GRID_HOME directories on the same machine, to run two completely separate solutions.

In short, the answer is no.

You can install two Clusterware homes on the same host, but only one Clusterware stack can be configured and active at a time.

Why do you want two clusters on the same host? I have never seen a host configured as a member of two clusters.

A cluster is a set of hosts working as a single system, and the cluster software manages and controls those hosts. If you had two cluster stacks on the same host, the two would compete over which one manages the host, and that must not happen.
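As a rough illustration (the paths here are only examples), on a typical 11gR2 Linux installation you can see that a host has exactly one registered Grid Infrastructure home and check whether its stack is up:

# The Oracle Local Registry records the single configured Clusterware home:
cat /etc/oracle/olr.loc
# Check the status of that one stack:
/u01/app/11.2.0/grid/bin/crsctl check crs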

Levi Pereira

Published by: Levi Pereira on May 4, 2012 21:54

Tags: Database

Similar Questions

  • Use the same IO node on sbRIO 9606 (VxWorks) and 9607 (Linux)

    Hello!

    I am trying to use the same FPGA VI on the sbRIO 9612, 9605, 9606 and 9607. For the first three it works fine as long as I give the IOs the same names on the different boards. For the 9607 I can't make it work.

    An IO node on the 9607 looks like:

    And the closest I can get on a 9606 is:

    because I'm not allowed to use a backslash in the name.

    So it does not work on both targets.

    Is there any other way than inserting a conditional disable structure for each I/O node I use?

    Thank you

    Anders

    Hi Anders,

    I noticed in your example that you created a CLIP in the interface for your I/O under the 9607. Were you planning to use CLIP-specific features? If not, you can simply add the I/O as on the 9606 (right-click the 9607 FPGA target > New > RIO Mezzanine Card) and the I/O should look the same. This gives you more parity between the targets. Otherwise, I think the two interfaces will have different properties. I modified your sample project and attached it. Take a look at "IO2.vi" and let us know if that's what you're looking for.

  • Processing data held in several caches in the same node

    We use a partitioned cache to load data (several types of data) into multiple named caches. We plan to keep all the related data in a single partition, which we have achieved using key association.

    Now I want to do the processing on that node, reconciling data from various sources. We tried an entry processor, but we want to consolidate data from multiple named caches on the node. In a very naïve way, I think of each named cache as a table, and I'm looking for a way to have a processor that does some processing on the associated data.

    I see that we can use some combination of entry processors and the Invocation service, but I am unable to implement it successfully.

    Can you please point me to any reference implementation where I can perform the processing on the data node without transferring the data to the client.

    Also, any reference implementation of map-reduce (on the server side) in Coherence would be useful.

    Regards,
    Gaudin

    Gaudin

    The same concept applies to batch processing. You can run an entry processor that starts a background thread for the batch processing. The only drawback is that you must manage failover explicitly to restart the thread on a new member if a member fails. This can be done by using a lease from the Incubator Common package (common.lease).
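    For reference, a minimal sketch of an entry processor invoked so the work runs on the storage member that owns the key (the cache name and the reconciliation logic are placeholders, assuming the classic Coherence 3.x API):

    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;

    // Executes on the member that owns the entry, so the data never leaves that node.
    public class ReconcileProcessor extends AbstractProcessor
            implements java.io.Serializable {
        public Object process(InvocableMap.Entry entry) {
            Object value = entry.getValue();
            // ... reconcile 'value' with the related data co-located via key association ...
            entry.setValue(value);
            return value;
        }
    }

    // Client side (hypothetical cache name): ship the processor to the owning member
    // instead of pulling the data back:
    //     NamedCache cache = CacheFactory.getCache("orders");
    //     cache.invoke(someKey, new ReconcileProcessor());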

    Paul

  • How to configure an asynchronous service operation with local-to-local routing so the request is sent and received on the same node by default?

    I am facing a problem configuring a service operation that is asynchronous and has a local-to-local routing. When I try to configure it, it automatically assumes it is an outbound request.

    However, I want to treat it as an inbound request: I want to send and receive the message on the same node.

    Can someone let me know the steps?

    For an example, take a look at a tutorial I wrote on the Integration Broker.

    You can now download the tutorial for free including some source files:

    Integration Broker for PeopleSoft Developers - Blogging about Oracle Applications

    In particular, take a look at Chapter 5, where I describe the steps you must take to create a custom inbound service operation and call that service operation from PeopleCode, as if you were sending an inbound message to PeopleSoft.

    The tutorial uses a synchronous service operation. For the asynchronous equivalent, change the service operation handler from OnRequest to OnNotify.

    I hope this helps.

    Kind regards

    Halin

  • FMS 3.5 and Adobe Connect Pro on the same machine?

    Is it possible to install Flash Media Server 3.5 on the same Windows machine where Adobe Connect Pro is installed?

    If so, how?

    When I try to, it says there is already an instance of FMS running and asks me to uninstall it.

    Of course, there is no FMS installed. I want both on the machine.

    I suspect that Adobe Acrobat Connect Pro uses an FMS engine to work properly.

    Anyone know about this?

    Thank you

    Israel

    No, you can't. Back in the day the same setup did not work with Breeze and FCS on the same node either. I suggest you use VMware to get separate nodes. However, I would not recommend mixing the two on the same physical machine, as both products each require quite a bit of resources.

  • Cloning an ASM database on the same server with a different SID

    Hello people!

    I tried to clone an existing database on the same host with a different name with no luck.

    I tried what Note 415579.1 recommends, but when I reach the RESTORE CONTROLFILE FROM clause it fails, since for that process the old and the new database names must be the same. I don't want that. I want a separate copy, not a standby database. There are other people in the company who will use this database, and I just want to replicate it.

    RMAN-00571: ===========================================================
    RMAN-00569: = ERROR MESSAGE STACK FOLLOWS =.
    RMAN-00571: ===========================================================
    RMAN-03002: failure of alter db command at 26/05/2009 14:51:15
    ORA-01103: database name "OLD_DB_NAME" in control file is not "NEW_DB_NAME"

    I just want to clone a database with a different name, the same way we used to do it with ALTER DATABASE BACKUP CONTROLFILE TO TRACE -> copy the DB files -> recreate the controlfile with a different database name.

    Exp/imp is a possible solution, but I wonder if there is another way to clone an existing database on the same node with a different name using RMAN? Just as we used to do with BACKUP CONTROLFILE TO TRACE -> copy the DB files -> recreate the controlfile with a different DB name.

    These are the links I tried to follow, without success.

    http://www.Oracle.com/technology/deploy/availability/PDF/asm_cloning.PDF
    Duplicate by RMAN 10 g

    and many others...

    Thanks for your help.
    Alex.

    You'll need an auxiliary instance if you want to restore the database under a different name.
    Take a look at the following note:
    http://jhdba.WordPress.com/2009/04/02/cloning-a-database-ASM-to-ASM/
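    For illustration only, a rough outline of a backup-based RMAN DUPLICATE to a new name on the same host (service names, passwords and the parameters below are placeholders; adjust the ASM disk group paths to your environment):

    # 1. Create an init.ora for the clone, e.g.:
    #      db_name=NEWDB
    #      db_file_name_convert=('+DATA/OLDDB/','+DATA/NEWDB/')
    #      log_file_name_convert=('+DATA/OLDDB/','+DATA/NEWDB/')
    #    then STARTUP NOMOUNT the auxiliary instance with ORACLE_SID=NEWDB.
    # 2. Duplicate from the source database's backups:
    $ rman target sys/password@OLDDB auxiliary sys/password@NEWDB
    RMAN> DUPLICATE TARGET DATABASE TO NEWDB;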

  • Preserving the read-in order of sibling nodes

    Does Berkeley DB XML preserve the read-in order of sibling nodes after reading in the XML document?

    In other words, does the XPath expression /node1/node2[2]/node3 always return the same node,
    regardless of the storage format of the data (XML file, container in Berkeley DB XML)?

    TIA,
    Maciej

    Document order is always preserved in the XML documents, no matter where they are stored. Your query will return the same node regardless of the location of the source document (assuming it's still the same document).

    Lauren Foutz

  • Two ASM nodes have the same instance name

    Hello

    I created a two-node ASM grid. When the grid installation (11.2.0.1) finished, it had created +ASM1 as the instance name on both of my nodes.

    I had set ORACLE_SID to +ASM1 and +ASM2 on node1 and node2 respectively, prior to installation.

    What did I do wrong during the grid installation? Please suggest.

    Thank you

    To create a virtual RAC environment in VirtualBox, or to share devices in an ASM environment, you should shut down the virtual machine and change the devices in VirtualBox using the following command:

    $ VBoxManage modifyhd asmdisk1.vdi --type shareable

    Then you can add the same disks to the other node and use the disk groups on the different ASM nodes. BTW, this is rather a question about using VirtualBox, for which there is a dedicated forum.
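    As a follow-up sketch (the VM names and storage controller name are placeholders), the now-shareable disk can then be attached to both virtual machines:

    $ VBoxManage storageattach rac1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asmdisk1.vdi
    $ VBoxManage storageattach rac2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asmdisk1.vdi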

  • Tree table - select all child nodes at the same time when the parent node is selected

    Hi all

    I am relatively new to ADF and JDeveloper, and I have hit a problem I can't get past.

    I have a tree table that looks like the following...

    Status 1 | Name | Employee ID

    -> Status 2 | Name | Employee ID | etc. | etc.

    -> Status 3 | Name | Employee ID | etc. | etc. | etc.

    What I want to do is: when I select the top/parent node of the tree (row 1), I would like the child nodes (rows 2 and 3) to be selected with the same mouse click.

    In the end I will be invoking a listener from a button, once the selection has been made, to change the value of the 'Status' column at all three levels that were selected.

    NB: each level in the tree above is derived from 3 distinct views, which are joined on the employee ID.

    Thanks in advance for your help and your advice.

    Kind regards

    Jamie.

    Jamie, tell us your version of jdev, please!

    This blog shows how to implement it: http://andrejusb.blogspot.de/2012/11/check-box-support-in-adf-tree-table.html

    Timo

  • How to set up cluster nodes on the same network but behind different gateways

    Hello
    We want to set up cluster nodes on the same network but connecting through different gateways, using unicast (WKA). We are trying to add the socket addresses to the <cluster-config> section of the tangosol-coherence-override.xml file; please help with how to set this up.

    Looks like you are using the default configuration for the system property:

    <unicast-listener>
      <port system-property="tangosol.coherence.localport">8088</port>
      <port-auto-adjust system-property="tangosol.coherence.localport.adjust">true</port-auto-adjust>
    </unicast-listener>

    The probable reason for this issue is that the localport is not fixed at 8088 for communication between the machines. Either use the system properties -Dtangosol.coherence.localport=8088 and -Dtangosol.coherence.localport.adjust=false, or set the port explicitly on each node so the nodes can find and connect to each other. It is recommended to list multiple nodes as well-known addresses, as in the unicast configuration below:

    <well-known-addresses>
      <socket-address id="1">
        <address>XXX</address>
        <port>8088</port>
      </socket-address>
      <socket-address id="2">
        <address>YYY</address>
        <port>8088</port>
      </socket-address>
    </well-known-addresses>
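    As a usage sketch (the class path is an assumption), the two system properties mentioned above can be passed on the command line when starting a cache server:

    java -cp coherence.jar \
         -Dtangosol.coherence.localport=8088 \
         -Dtangosol.coherence.localport.adjust=false \
         com.tangosol.net.DefaultCacheServer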

    Cheers,
    Neeraj

  • Can I create different Coherence nodes in the same cluster with different cache configs?

    Can I create different Coherence nodes in the same cluster with different cache-config.xml files?
    Can one cache be distributed across these different nodes?

    Yes. You can create different Coherence nodes in the same cluster with different cache-config.xml files, as long as you use the same tangosol-coherence.xml file and the same tangosol-coherence-override.xml file. But you cannot store the data of one cache across nodes created with different cache-config files. In other words, a node can only use the caches defined in the cache-config.xml file it was started with.

    See the following demo:
    I start a cache server using the examples cache config file examples-cache-config.xml. Then I start a storage-disabled cache console (cache client) using the config file coherence-cache-config.xml. Both of them use the same tangosol-coherence.xml file and the same tangosol-coherence-override.xml file.
    The cache server runs the cache service PartitionedPofCache, but the client side uses the DistributedCache service. The cluster address is the same, 224.3.5.2.
    The cluster name is also the same. They see each other.

    D:\coherence\lib>D:\examples\java\bin\run-cache-server.cmd

    D:\coherence\lib>D:\examples\java\bin\run-cache-server.cmd
    The system cannot find the file D:\coherence.
    The system cannot find the file C:\Oracle\Middleware\jdk160_11.
    2009-12-22 12:09:31.400/4.987 Oracle Coherence 3.5.2/463  (thread=main, member=n/a): Loaded operational configurat
    ion from resource "jar:file:/D:/coherence/lib/coherence.jar!/tangosol-coherence.xml"
    2009-12-22 12:09:31.450/5.037 Oracle Coherence 3.5.2/463  (thread=main, member=n/a): Loaded operational overrides
    from resource "jar:file:/D:/coherence/lib/coherence.jar!/tangosol-coherence-override-dev.xml"
    2009-12-22 12:09:31.470/5.057 Oracle Coherence 3.5.2/463  (thread=main, member=n/a): Optional configuration override
     "/tangosol-coherence-override.xml" is not specified
    2009-12-22 12:09:31.540/5.127 Oracle Coherence 3.5.2/463  (thread=main, member=n/a): Optional configuration override
     "/custom-mbeans.xml" is not specified
    
    Oracle Coherence Version 3.5.2/463
     Grid Edition: Development mode
    Copyright (c) 2000, 2009, Oracle and/or its affiliates. All rights reserved.
    
    2009-12-22 12:09:33.864/7.451 Oracle Coherence GE 3.5.2/463  (thread=main, member=n/a): Loaded cache configuration
     from "file:/D:/examples/java/resource/config/examples-cache-config.xml"
    2009-12-22 12:09:39.983/13.570 Oracle Coherence GE 3.5.2/463  (thread=Cluster, member=n/a): Service Cluster joined t
    he cluster with senior service member n/a
    2009-12-22 12:09:43.187/16.774 Oracle Coherence GE 3.5.2/463  (thread=Cluster, member=n/a): Created a new cluster
    "cluster:0xD3FB" with Member(Id=1, Timestamp=2009-12-22 12:09:38.06, Address=192.168.8.80:8088, MachineId=24656, Locatio
    n=process:144, Role=CoherenceServer, Edition=Grid Edition, Mode=Development, CpuCount=1, SocketCount=1) UID=0xC0A8085000
    000125B75D888C60501F98
    2009-12-22 12:09:43.508/17.095 Oracle Coherence GE 3.5.2/463  (thread=Invocation:Management, member=1): Service Mana
    gement joined the cluster with senior service member 1
    2009-12-22 12:09:46.582/20.169 Oracle Coherence GE 3.5.2/463  (thread=DistributedCache:PartitionedPofCache, member=1
    ): Service PartitionedPofCache joined the cluster with senior service member 1
    2009-12-22 12:09:46.672/20.259 Oracle Coherence GE 3.5.2/463  (thread=DistributedCache:PartitionedPofCache, member
    =1): Loading POF configuration from resource "file:/D:/examples/java/resource/config/examples-pof-config.xml"
    2009-12-22 12:09:46.702/20.289 Oracle Coherence GE 3.5.2/463  (thread=DistributedCache:PartitionedPofCache, member
    =1): Loading POF configuration from resource "jar:file:/D:/coherence/lib/coherence.jar!/coherence-pof-config.xml"
    2009-12-22 12:09:47.734/21.321 Oracle Coherence GE 3.5.2/463  (thread=main, member=1): Started DefaultCacheServer.
    ..
    
    SafeCluster: Name=cluster:0xD3FB
    
    Group{Address=224.3.5.2, Port=35463, TTL=4}
    
    MasterMemberSet
      (
      ThisMember=Member(Id=1, Timestamp=2009-12-22 12:09:38.06, Address=192.168.8.80:8088, MachineId=24656, Location=process
    :144, Role=CoherenceServer)
      OldestMember=Member(Id=1, Timestamp=2009-12-22 12:09:38.06, Address=192.168.8.80:8088, MachineId=24656, Location=proce
    ss:144, Role=CoherenceServer)
      ActualMemberSet=MemberSet(Size=1, BitSetCount=2
        Member(Id=1, Timestamp=2009-12-22 12:09:38.06, Address=192.168.8.80:8088, MachineId=24656, Location=process:144, Rol
    e=CoherenceServer)
        )
      RecycleMillis=120000
      RecycleSet=MemberSet(Size=0, BitSetCount=0
        )
      )
    
    Services
      (
      TcpRing{TcpSocketAccepter{State=STATE_OPEN, ServerSocket=192.168.8.80:8088}, Connections=[]}
      ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.5, OldestMemberId=1}
      InvocationService{Name=Management, State=(SERVICE_STARTED), Id=1, Version=3.1, OldestMemberId=1}
      DistributedCache{Name=PartitionedPofCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCo
    unt=1, AssignedPartitions=257, BackupPartitions=0}
      )
    
    2009-12-22 12:12:29.737/183.324 Oracle Coherence GE 3.5.2/463  (thread=Cluster, member=1): Member(Id=2, Timestamp=20
    09-12-22 12:12:29.541, Address=192.168.8.80:8089, MachineId=24656, Location=process:1188, Role=CoherenceConsole) joined
    Cluster with senior member 1
    2009-12-22 12:12:30.498/184.085 Oracle Coherence GE 3.5.2/463  (thread=Cluster, member=1): Member 2 joined Service M
    anagement with senior member 1
    2009-12-22 12:12:31.860/185.447 Oracle Coherence GE 3.5.2/463  (thread=Cluster, member=1): TcpRing: connecting to me
    mber 2 using TcpSocket{State=STATE_OPEN, Socket=Socket[addr=/192.168.8.80,port=8089,localport=2463]}
    2009-12-22 12:12:51.338/204.925 Oracle Coherence GE 3.5.2/463  (thread=Cluster, member=1): Member 2 joined Service D
    istributedCache with senior member 2
    

    The following command starts a cache client.
    D:\coherence\bin>coherence.cmd

    D:\coherence\bin>coherence.cmd
    ** Starting storage disabled console **
    java version "1.6.0_11"
    Java(TM) SE Runtime Environment (build 1.6.0_11-b03)
    Java HotSpot(TM) Server VM (build 11.0-b16, mixed mode)
    
    2009-12-22 12:12:21.054/3.425 Oracle Coherence 3.5.2/463  (thread=main, member=n/a): Loaded operational configurat
    ion from resource "jar:file:/D:/coherence/lib/coherence.jar!/tangosol-coherence.xml"
    2009-12-22 12:12:21.355/3.726 Oracle Coherence 3.5.2/463  (thread=main, member=n/a): Loaded operational overrides
    from resource "jar:file:/D:/coherence/lib/coherence.jar!/tangosol-coherence-override-dev.xml"
    2009-12-22 12:12:21.365/3.736 Oracle Coherence 3.5.2/463  (thread=main, member=n/a): Optional configuration override
     "/tangosol-coherence-override.xml" is not specified
    2009-12-22 12:12:21.415/3.786 Oracle Coherence 3.5.2/463  (thread=main, member=n/a): Optional configuration override
     "/custom-mbeans.xml" is not specified
    
    Oracle Coherence Version 3.5.2/463
     Grid Edition: Development mode
    Copyright (c) 2000, 2009, Oracle and/or its affiliates. All rights reserved.
    
    2009-12-22 12:12:29.316/11.687 Oracle Coherence GE 3.5.2/463  (thread=Cluster, member=n/a): Service Cluster joined t
    he cluster with senior service member n/a
    2009-12-22 12:12:29.356/11.727 Oracle Coherence GE 3.5.2/463  (thread=Cluster, member=n/a): Failed to satisfy the
    variance: allowed=16, actual=20
    2009-12-22 12:12:29.356/11.727 Oracle Coherence GE 3.5.2/463  (thread=Cluster, member=n/a): Increasing allowable v
    ariance to 17
    2009-12-22 12:12:29.807/12.178 Oracle Coherence GE 3.5.2/463  (thread=Cluster, member=n/a): This Member(Id=2, Time
    stamp=2009-12-22 12:12:29.541, Address=192.168.8.80:8089, MachineId=24656, Location=process:1188, Role=CoherenceConsole,
     Edition=Grid Edition, Mode=Development, CpuCount=1, SocketCount=1) joined cluster "cluster:0xD3FB" with senior Member(I
    d=1, Timestamp=2009-12-22 12:09:38.06, Address=192.168.8.80:8088, MachineId=24656, Location=process:144, Role=CoherenceS
    erver, Edition=Grid Edition, Mode=Development, CpuCount=1, SocketCount=1)
    2009-12-22 12:12:29.977/12.348 Oracle Coherence GE 3.5.2/463  (thread=Cluster, member=n/a): Member 1 joined Service
    Management with senior member 1
    2009-12-22 12:12:29.977/12.348 Oracle Coherence GE 3.5.2/463  (thread=Cluster, member=n/a): Member 1 joined Service
    PartitionedPofCache with senior member 1
    2009-12-22 12:12:30.578/12.949 Oracle Coherence GE 3.5.2/463  (thread=Invocation:Management, member=2): Service Mana
    gement joined the cluster with senior service member 1
    SafeCluster: Name=cluster:0xD3FB
    
    Group{Address=224.3.5.2, Port=35463, TTL=4}
    
    MasterMemberSet
      (
      ThisMember=Member(Id=2, Timestamp=2009-12-22 12:12:29.541, Address=192.168.8.80:8089, MachineId=24656, Location=proces
    s:1188, Role=CoherenceConsole)
      OldestMember=Member(Id=1, Timestamp=2009-12-22 12:09:38.06, Address=192.168.8.80:8088, MachineId=24656, Location=proce
    ss:144, Role=CoherenceServer)
      ActualMemberSet=MemberSet(Size=2, BitSetCount=2
        Member(Id=1, Timestamp=2009-12-22 12:09:38.06, Address=192.168.8.80:8088, MachineId=24656, Location=process:144, Rol
    e=CoherenceServer)
        Member(Id=2, Timestamp=2009-12-22 12:12:29.541, Address=192.168.8.80:8089, MachineId=24656, Location=process:1188, R
    ole=CoherenceConsole)
        )
      RecycleMillis=120000
      RecycleSet=MemberSet(Size=0, BitSetCount=0
        )
      )
    
    Services
      (
      TcpRing{TcpSocketAccepter{State=STATE_OPEN, ServerSocket=192.168.8.80:8089}, Connections=[]}
      ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.5, OldestMemberId=1}
      InvocationService{Name=Management, State=(SERVICE_STARTED), Id=1, Version=3.1, OldestMemberId=1}
      )
    
    Map (?):
    
    2009-12-22 12:12:49.505/31.906 Oracle Coherence GE 3.5.2/463  (thread=main, member=2): Loaded cache configuration
    from "jar:file:/D:/coherence/lib/coherence.jar!/coherence-cache-config.xml"
    2009-12-22 12:12:51.358/33.729 Oracle Coherence GE 3.5.2/463  (thread=DistributedCache, member=2): Service Distribut
    edCache joined the cluster with senior service member 2
    
      
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>example-binary-backing-map</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    
    

    But when I try to store data in the cache from the client side, it reports the error that storage is not configured. It shows that this cache console cannot store data in the existing server cache, since they use different cache config files.

    Map (ca3): cache ca2
    
      
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>example-binary-backing-map</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    
    
    Map (ca2): put 1 one
    2009-12-22 14:00:04.999/6467.370 Oracle Coherence GE 3.5.2/463  (thread=main, member=2):
    java.lang.RuntimeException: Storage is not configured
            at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.onMissing
    Storage(DistributedCache.CDB:9)
            at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.ensureReq
    uestTarget(DistributedCache.CDB:34)
            at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(Distr
    ibutedCache.CDB:22)
            at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(Distr
    ibutedCache.CDB:1)
            at com.tangosol.util.ConverterCollections$ConverterMap.put(ConverterCollections.java:1541)
            at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.put(Distrib
    utedCache.CDB:1)
            at com.tangosol.coherence.component.util.SafeNamedCache.put(SafeNamedCache.CDB:1)
            at com.tangosol.coherence.component.application.console.Coherence.processCommand(Coherence.CDB:581)
            at com.tangosol.coherence.component.application.console.Coherence.run(Coherence.CDB:39)
            at com.tangosol.coherence.component.application.console.Coherence.main(Coherence.CDB:3)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at com.tangosol.net.CacheFactory.main(CacheFactory.java:1400)
    
  • Node Manager for two domains located on the same server

    I need help understanding how to set up Node Manager for two domains on the same servers. My configuration is:

    Server A:
    AdminServer for domain D1
    ManagedServer1 domain D1
    ManagedServer2 domain D2
    Server B:
    AdminServer for domain D2
    ManagedServer1 domain D2
    ManagedServer2 domain D1

    Just to clarify, D1 and D2 are dev and QA environments. The WebLogic and domain configurations are all in different, completely separate folders.

    I started a Java-based Node Manager on each server. The Node Manager configs are in different folders as well. Something is messed up, because it does not work properly. For example, when I start the Node Manager on server B, it kills the AdminServer for D2.

    Can I have just one Node Manager per server in this configuration, with the problem being somewhere in the config?
    Or should I have one Node Manager per domain on each server (two Node Managers per server)?

    Sorry if this is a basic question, I am a newbie with WebLogic configuration.

    Thank you
    Oleg

    I'm not sure why you have several domains. One admin server can manage multiple clusters in a single domain; a domain is just a group of admin servers and clusters. You could do something like:

    Server A:
    AdminServer domain D1 with 2 clusters C1, C2
    ManagedServer1 cluster C1
    ManagedServer2 cluster C2
    Server B:
    ManagedServer1 cluster C2
    ManagedServer2 cluster C1

    That would allow you to use only one Node Manager per server.

    With your current setup, make sure that each of your domains uses a Node Manager port separate from the other. The default is 5556, so if two of your domains use the same NM port, there will be a problem.

    For example, with your domain layout:

    Server A:
    AdminServer for domain D1
    ManagedServer1 domain D1 - use NM port 5556
    ManagedServer2 domain D2 - use NM port 5557
    Server B:
    AdminServer for domain D2
    ManagedServer1 domain D2 - use NM port 5557
    ManagedServer2 domain D1 - use NM port 5556

    You will need to set nodemanager.properties and the Machine configuration appropriately for each domain in BOTH administration consoles. You should be able to run multiple Node Managers on different ports.
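    A minimal sketch of the per-domain nodemanager.properties, assuming one Java Node Manager per domain on each server (the listen address is a placeholder); each domain's Machine definition then points at the matching port:

    # Node Manager for domain D1 on Server A
    ListenAddress=serverA.example.com
    ListenPort=5556

    # Node Manager for domain D2 on Server A
    ListenAddress=serverA.example.com
    ListenPort=5557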

  • Reading nodes that have sub-nodes of the same name - XML

    This is more of a general Java / XML problem, but given that it is going into my BlackBerry app I thought I'd see if anyone knows.

    Consider a simple XML document:

                        Whatever 1                   Whatever 2               Whatever 3       
    

    Using the standard org.w3c.dom, I can get the 'x' nodes with the usual...

    NodeList fullnodelist = doc.getElementsByTagName("x");

    But if I want to get at the next set of 'e' elements, I try to use something like...

    Element pelement = (Element) fullnodelist.item(0);
    NodeList nodes = pelement.getElementsByTagName("e");
    

    I EXPECTED it to return 3 nodes (because there are 3 sets of 'e'), but it returns 9 - because it apparently picks up all the entries, including the nested 'e' elements.

    That would be fine in the above case, because I could probably go through and find what I'm looking for. The problem I have is when the XML file looks like the following:

                whatever              Something Else                    whatever              Something Else            
    

    When I ask for the 'e' elements, it returns 4, instead of (what I want) 2.

    Am I simply not understanding how DOM parsing works? Generally, in the past I used my own XML documents, so I never named elements like this, but unfortunately this isn't my XML file and I don't have the option to change it.

    What I thought I would do is write a loop that "drills down" through the nodes so that I can examine each node...

      public static NodeList getNodeList(Element pelement, String find)
        {
            String[] nodesfind = Utilities.Split(find, "/");
            NodeList nodeList = null;
    
            for (int i = 0 ; i <= nodesfind.length - 1; i++ )
            {
                nodeList = pelement.getElementsByTagName( nodesfind[i] );
                pelement = (Element)nodeList.item(i);
            }
    
        // the node list we are looking for
            return nodeList;
        }
    

    ...whereas if you passed 's/e' into the function, it would return the 2 nodes I'm looking for (or elements, perhaps I'm using the wrong terminology?). Instead, it returns all the 'e' nodes under that node.

    Anyway, if anyone is still with me and has a suggestion, it would be appreciated.

    Well, there is no doubt that there is a steep learning curve for XML programming. You might take an hour or two and go through one of the tutorials that are circulating on the net (like the one at w3schools.com).

    Basically, almost everything in XML is a node, including the Document that the parser returns. The API for Node tells you that you can test which node type you have by calling getNodeType, which returns one of the node type constants (Node.ELEMENT_NODE, Node.TEXT_NODE, etc.). If necessary, you can then cast the variable to the corresponding interface (Element, Text, etc.).

    Similarly, the API documentation says that for any node, calling getChildNodes (or, for an Element or Document node, getElementsByTagName) will give you a NodeList (a somewhat untyped list of nodes in the XML API), while calling getFirstChild and getNextSibling on any node will give you another Node (or null).

    Once you learn the API, writing the logic of course is not all that hard. For example, if the only 'e' tags of interest are those directly under the root element of the document (as shown in your example), you can simply go to them directly:

    Vector getTopENodes(Document doc) {
      Vector vec = new Vector();
      NodeList nodes = doc.getDocumentElement().getChildNodes();
      int n = nodes.getLength();
      for (int i = 0; i < n; ++i) {
        Node node = nodes.item(i);
        if (node.getNodeType() == Node.ELEMENT_NODE &&
            "e".equals(node.getNodeName())) {
          vec.addElement(node);
        }
      }
      return vec;
    }
    

    Note that this example does not assume that all children are 'e' element nodes: the document could have comments, whitespace or something else that makes it into the DOM as a comment, text or other type of node.

    On the other hand, if you want to capture every 'e' tag that is directly under an 's' tag, no matter at what level, then you need to do something a little more complicated (this is off the top of my head - no guarantees):

    static class NodeListImp implements NodeList {
      private Vector nodes = new Vector();
      public int getLength() {
        return nodes.size();
      }
      public Node item(int index) {
        return (Node) nodes.elementAt(index);
      }
      public void add(Node node) {
        nodes.addElement(node);
      }
    }

    NodeList getTargetNodes(Document doc) {
      NodeListImp list = new NodeListImp();
      getTargetNodes(list, doc.getDocumentElement(), false);
      return list;
    }

    void getTargetNodes(NodeListImp list, Node node, boolean parentIsS) {
      if (node.getNodeType() == Node.ELEMENT_NODE) {
        // node name is tag name for element nodes
        String name = node.getNodeName();
        if (parentIsS && "e".equals(name)) {
          list.add(node);
        }
        parentIsS = "s".equals(name);
        for (Node child = node.getFirstChild();
             child != null;
             child = child.getNextSibling()) {
          getTargetNodes(list, child, parentIsS);
        }
      }
    }
    

    I hope that it gets the idea across.

  • Add multiple nodes of the same name from one xml into another

    Hello

    Following my question the other day (adding several nodes from one XMLType into another), I now need something a bit more complex, and I can't work out where to begin, assuming it is something that can reuse some/all of yesterday's code (thank you once again, odie_63!). ETA: I'm on 11.2.0.3.

    So, here's the (slightly modified) XML with yesterday's solution:

    with sample_data as (
      select xmltype('<root>
                        <xmlnode>
                          <subnode1>val1</subnode1>
                          <subnode2>val2</subnode2>
                        </xmlnode>
                        <xmlnode>
                          <subnode1>val3</subnode1>
                          <subnode2>val4</subnode2>
                        </xmlnode>
                      </root>') xml_to_update
           , xmltype('<a>
                        <b>valb</b>
                        <c>valc</c>
                        <d>
                          <d1>vald1</d1>
                          <d2>vald2</d2>
                        </d>
                        <e>vale</e>
                        <f>valf</f>
                        <g>
                          <g1>valg1</g1>
                          <g2>valg2</g2>
                        </g>
                        <h>
                          <h1>valh1</h1>
                          <h2>valh2</h2>
                        </h>
                        <volume>
                          <name>fred</name>
                          <type>book</type>
                          <head>1</head>
                        </volume>
                        <volume>
                          <name>bob</name>
                          <type>car</type>
                          <head>0</head>
                        </volume>
                      </a>') xml_to_extract_from
      from dual
    )
    select xmlserialize(document
             xmlquery(
               'copy $d := $old
                modify (
                  insert node element extrainfo {
                    $new/a/b
                  , $new/a/d
                  , $new/a/f
                  , $new/a/h
                  } as first into $d/root
                )
                return $d'
               passing sd.xml_to_update as "old"
                     , sd.xml_to_extract_from as "new"
               returning content
             )
             indent
           )
    from sample_data sd;

    It gives me:

    <root>
      <extrainfo>
        <b>valb</b>
        <d>
          <d1>vald1</d1>
          <d2>vald2</d2>
        </d>
        <f>valf</f>
        <h>
          <h1>valh1</h1>
          <h2>valh2</h2>
        </h>
      </extrainfo>
      <xmlnode>
        <subnode1>val1</subnode1>
        <subnode2>val2</subnode2>
      </xmlnode>
      <xmlnode>
        <subnode1>val3</subnode1>
        <subnode2>val4</subnode2>
      </xmlnode>
    </root>

    However, I now need to add new nodes based on the information provided by the <volume> nodes, something like:

    <root>
      <extrainfo>
        <b>valb</b>
        <d>
          <d1>vald1</d1>
          <d2>vald2</d2>
        </d>
        <f>valf</f>
        <h>
          <h1>valh1</h1>
          <h2>valh2</h2>
        </h>
        <newnode>
          <name>fred</name>
          <type>book</type>
        </newnode>
        <newnode>
          <name>bob</name>
          <type>car</type>
        </newnode>
      </extrainfo>
      <xmlnode>
        <subnode1>val1</subnode1>
        <subnode2>val2</subnode2>
        <type>book</type>
      </xmlnode>
      <xmlnode>
        <subnode1>val3</subnode1>
        <subnode2>val4</subnode2>
        <type>car</type>
      </xmlnode>
    </root>

    If it's easier, I *think* we would be OK with something like:

    ...

    <newnode>
      <type name="fred">book</type>
      <type name="bob">car</type>
    </newnode>

    ...

    This is the closest I have come:

    with sample_data as (
      select xmltype('<root>
                        <xmlnode>
                          <subnode1>val1</subnode1>
                          <subnode2>val2</subnode2>
                        </xmlnode>
                        <xmlnode>
                          <subnode1>val3</subnode1>
                          <subnode2>val4</subnode2>
                        </xmlnode>
                      </root>') xml_to_update
           , xmltype('<a>
                        <b>valb</b>
                        <c>valc</c>
                        <d>
                          <d1>vald1</d1>
                          <d2>vald2</d2>
                        </d>
                        <e>vale</e>
                        <f>valf</f>
                        <g>
                          <g1>valg1</g1>
                          <g2>valg2</g2>
                        </g>
                        <h>
                          <h1>valh1</h1>
                          <h2>valh2</h2>
                        </h>
                        <volume>
                          <name>fred</name>
                          <type>book</type>
                          <head>1</head>
                        </volume>
                        <volume>
                          <name>bob</name>
                          <type>car</type>
                          <head>0</head>
                        </volume>
                      </a>') xml_to_extract_from
      from dual
    )
    select xmlserialize(document
             xmlquery(
               'copy $d := $old
                modify (
                  insert node element extrainfo {
                    $new/a/b
                  , $new/a/d
                  , $new/a/f
                  , $new/a/h
                  , element newnode {
                      $new/a/volume/name
                    , $new/a/volume/type
                    }
                  } as first into $d/root
                )
                return $d'
               passing sd.xml_to_update as "old"
                     , sd.xml_to_extract_from as "new"
               returning content
             )
             indent
           ) fred
    from sample_data sd;

    Which produces:

    ...

    <newnode>
      <name>fred</name>
      <name>bob</name>
      <type>book</type>
      <type>car</type>
    </newnode>

    ...

    -obviously not right!

    Can anyone give advice? I tried to find similar examples, but I can't put it in the right search terms, or something!

    Hello

    To handle the repeating nodes, you have to iterate using a FLWOR expression:

    copy $d := $old
    modify (
      insert node element extrainfo {
        $new/a/b
      , $new/a/d
      , $new/a/f
      , $new/a/h
      , for $i in $new/a/volume
        return element newnode { $i/name, $i/type }
      } as first into $d/root
    )
    return $d

    or, to produce the alternative form you suggested:

    copy $d := $old
    modify (
      insert node element extrainfo {
        $new/a/b
      , $new/a/d
      , $new/a/f
      , $new/a/h
      , element newnode {
          for $i in $new/a/volume
          return element type {
            attribute name { data($i/name) }
          , data($i/type)
          }
        }
      } as first into $d/root
    )
    return $d

  • 2 cluster nodes must not run on the same ESX host

    Hello

    I would like to know how to make sure that 2 VMs (an application cluster) cannot run on the same ESX host (for example: with only 2 ESX hosts in a cluster, what happens when one ESX host crashes!)

    Thank you

    Regards,

    You must edit the properties of the DRS cluster and add a new anti-affinity rule.

    See Cluster / Properties / DRS.

    André
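    For reference, a hedged PowerCLI equivalent (the cluster and VM names are placeholders) that creates the same kind of "separate virtual machines" anti-affinity rule:

    # Keep the two application cluster nodes on different ESX hosts
    Connect-VIServer vcenter.example.com
    New-DrsRule -Cluster (Get-Cluster "ProdCluster") -Name "SeparateAppNodes" `
        -KeepTogether:$false -VM (Get-VM "appnode1","appnode2")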
