JMS distributed Destinations & queues in a clustered environment

I'm a little confused trying to configure the JMS subsystem for my application.

I am migrating my configuration from WLS 8.1 to 11g, and rather than use the migration tools, I am recreating it from scratch.

We have 4 Java virtual machines in a cluster, with a JMS server and a persistent store on each one. The JMS servers are called JMS01, JMS02, JMS03 and JMS04.

We use distributed destinations for most of the queues, targeted at all JMS servers 01-04, but there is also a set of "pinned" queues: one group is intended for JMS01, another group is intended for JMS02, and so on.

So we have distributed destinations targeted at JMS01, 02, 03 and 04,

and

JMS destination group 1, for JMS01
JMS destination group 2, for JMS02
JMS destination group 3, for JMS03
JMS destination group 4, for JMS04

Would someone be able to give me some advice on creating this configuration with modules and subdeployments in 11g?

Pete

Hi Pete,

I highly recommend reading [JMS Configuration Best Practices | http://download.oracle.com/docs/cd/E15523_01/web.1111/e13738/best_practice.htm#CACJCGHG], and accordingly recommend creating 5 modules, each with a single subdeployment: one module/subdeployment each for JMS01, 02, 03 and 04, plus an additional one that refers to all 4 JMS servers. Alternatively, you can put all the configuration in a single module that uses multiple subdeployments, but this tends to be much harder to manage and understand over time (in other words, it's easier to keep a one-to-one correspondence between module and subdeployment). The "Best Practices" chapter is new in 11gR1PS1/10.3.2, but the advice applies all the way back to version 9.0.
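As a concrete sketch of what one of the pinned modules might contain (all names here are hypothetical, the descriptor namespace varies by release, and the subdeployment-to-JMS-server targeting itself is done in the console or config.xml rather than in the module file):

```xml
<!-- Hypothetical JMS module descriptor for the queues pinned to JMS01.
     The module uses a single subdeployment, JMS01SubDeployment, which
     would be targeted only at the JMS01 JMS server. -->
<weblogic-jms xmlns="http://www.bea.com/ns/weblogic/weblogic-jms">
  <queue name="PinnedQueueA">
    <sub-deployment-name>JMS01SubDeployment</sub-deployment-name>
    <jndi-name>jms/PinnedQueueA</jndi-name>
  </queue>
  <queue name="PinnedQueueB">
    <sub-deployment-name>JMS01SubDeployment</sub-deployment-name>
    <jndi-name>jms/PinnedQueueB</jndi-name>
  </queue>
</weblogic-jms>
```

The fifth, distributed-destination module would look the same except that its single subdeployment targets all four JMS servers and its destinations are uniform distributed queues.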

If you haven't already, I'd also recommend getting hold of the new book "Professional WebLogic".

Hope this helps,

Tom

Tags: Fusion Middleware

Similar Questions

  • Problem with JMS via the SSL protocol in clustered environment

    Hello

    We run a WebLogic 11g cluster (domain) which consists of an admin server and two managed servers, MS1 and MS2.
    AS and MS1 run on machine 1, MS2 runs on machine 2. Both machines have two network interfaces: a public one used for client connections and an internal one for cluster communication, monitoring, etc. The default channel of each WebLogic server listens on the internal network interface, and in addition we have two channels (for the http and t3 protocols) configured on the public interface.
    Both managed servers are JMS providers, and there is a JMS module myModule in the domain with the following JMS resources: a custom connection factory myConnFactory (load balancing enabled = true, server affinity = false, target: entire cluster) and myQueue, a uniform distributed queue (targets: MS1, MS2). The queue is accessed by its logical JNDI name, but it has a member pinned on each managed server.

    JMS communication normally flows through the dedicated t3 channel listening on the public interface. However, a new external client will send messages to myQueue, and the communication must be encrypted for security reasons. For this reason, we have implemented SSL. Instead of activating a DefaultSecure channel, we left 'SSL Listen Port Enabled' = false (as the default channel would be bound to the internal network interface) and created a new channel, T3SChannel, with the t3s protocol on the public interface for incoming client connections.

    The client creates a t3s connection to the cluster (through T3SChannel) and obtains the connection factory and the queue using a JNDI lookup. The live JMS connection is with MS1. If we try to create two consumers for this queue, the first consumer is created on MS1 and the second should be created on MS2 (thanks to load balancing being active). However, the creation of the second consumer fails with an exception (thrown on the client):

    java.rmi.ConnectException: No known valid port for: 'DefaultSecure[t3s]:t3s(t3s):mserver1-internal.company.com:56213:null:-1'; No router available at destination
    at weblogic.rjvm.ConnectionManager.bootstrap(ConnectionManager.java:464)
    at weblogic.rjvm.ConnectionManager.bootstrap(ConnectionManager.java:396)
    at weblogic.rjvm.RJVMImpl.ensureConnectionEstablished(RJVMImpl.java:303)
    at weblogic.rjvm.RJVMImpl.getOutputStream(RJVMImpl.java:347)
    at weblogic.rjvm.RJVMImpl.getRequestStreamInternal(RJVMImpl.java:610)
    ... 18 more

    We were told that the exception can be avoided by adding a <default-protocol>t3s</default-protocol> element (the default is t3) to config.xml in the WebLogic domain. If we configure t3s as the default protocol, we also have to enable the DefaultSecure channel on each server, and then everything works and the client is able to create consumers correctly.

    However, as a side effect, the entire cluster then communicates at the weblogic.rjvm layer over t3s. We do not want that, because internal cluster communications are secured well enough by other means, and it would have a notable performance impact in the production environment. In principle, it should be possible to let the external client connect to the JMS provider via the new secure channel without affecting the existing internal communication in the cluster (which should be a black box to the client).

    My question: is it possible to run the example described without defining the default protocol to t3s?

    Thanks for the reply.

    My question: is it possible to run the example described without defining the default protocol to t3s?

    Thanks for the very clear problem description. I checked with our customer support guru, and I'm afraid the answer to your question is no. I think you have encountered a known problem and have already found the recommended workaround.

    That said, you may be able to at least partially avoid the problem by setting "server-affinity = true" on your CF. As you probably know, affinity = false encourages consumer and producer traffic to route from the client to its connection host server, and then possibly make a "second hop" to another server in the cluster. It looks like the implicit attempt to downgrade a request that originated over a secure SSL channel on the first hop down to an unsecured channel on that second hop is what throws the exception.
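    For reference, here is roughly how that setting would look in the JMS module descriptor; the JNDI name is assumed, and the element names are from the weblogic-jms schema:

```xml
<!-- myConnFactory with server affinity turned on, so a client's
     consumers stay on the server it first connected to instead of
     making an unsecured second hop to another cluster member. -->
<connection-factory name="myConnFactory">
  <jndi-name>jms/myConnFactory</jndi-name>
  <load-balancing-params>
    <load-balancing-enabled>true</load-balancing-enabled>
    <server-affinity-enabled>true</server-affinity-enabled>
  </load-balancing-params>
</connection-factory>
```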

    HTH,

    Tom

  • JMS clustering vs. distributed destinations / persistent stores and failover

    Hi again,

    I got a bit confused while working my way through the WebLogic documentation and hoped that someone could clear a few things up for me.

    Is there a difference between "JMS clustering" and "distributed destinations"? As far as I knew, "JMS clustering" simply uses "distributed destinations" to spread the JMS load across the whole cluster, together with the server affinity and load balancing configuration options. Or is there something beyond that?

    What I was also wondering is: what happens to the persistent stores of "distributed destinations" in the case of a server failure? Future messages will not be sent to that destination, but rather to the other destinations defined in the distributed destination. But what happens to the persistent store associated with the failed destination and the messages persisted in it? It seems to me that they are lost, unless the failed server is brought back up, or a replacement instance is brought up, or server migration migrates the entire instance to another machine, in which case the destination should still be available and function correctly. Is this correct? Or are there other ways to retrieve the stored messages? Or can several destinations share the same persistent store, e.g. by using the same data source configuration?

    Thank you
    Chris

    You are basically correct. JMS clustering is implemented by distributed Destinations.

    When a managed server goes down, its locally stored JMS messages will be on hold until you bring the server up again. If you want to implement server or service migration, the persistent stores must be on some type of SAN, so the messages can be consumed once the migration is complete.


  • Deploying file proxy services in a clustered environment

    Hello

    To deploy a file proxy service in a clustered environment, special handling is needed in the proxy service implementation to ensure that the files can be processed by the managed servers.

    If I understand correctly, you can create a distributed queue which is linked to wlsb.internal.transport.task.queue.file, and the local queues on the managed servers can be added as members of the distributed queue.

    Currently, a file only gets processed via the managed server that is the poller target server for the file proxy service. The file is not processed if it arrives via the other managed server, and it remains in the stage directory.

    Any help will be appreciated. Please let me know if you need more details.

    Thank you

    @SKB: Hard to believe none of the posts was helpful or correct. Please mark them accordingly, to help other readers in the future.

  • BPM process human task not rendered (in a clustered environment)

    Using JDeveloper version 12.2.1.

    I have an application that is deployed in a clustered environment.

    Any BPM ADF task flow that I try to load from the BPM task list comes back blank with:

    Not found

    The requested URL /workflow/DOO_Simulation_ProjectUI/faces/adf.task-flow was not found.


    This happens for my own human tasks, but also for the predefined tasks we can create using BPM Workspace.

    The ADF hwtaskflow configuration points to the HTTP server host name when this happens.

    If I change the host name to one of the nodes in the cluster, however, the page renders.

    Does anyone have suggestions on what this might be? The server logs show no information about this error.

    Thank you

    Ok. So I got the answer.

    I had to allow connections to /workflow/* in my OHS.

    I also had to add the following to my web.xml file:

    CLIENT-CERT

    MYREALM
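    Those two values look like the contents of a standard servlet login-config element that the forum software stripped of its tags; presumably the web.xml fragment was something like:

```xml
<!-- Presumed original web.xml fragment: client-certificate
     authentication against the MYREALM security realm. -->
<login-config>
  <auth-method>CLIENT-CERT</auth-method>
  <realm-name>MYREALM</realm-name>
</login-config>
```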

  • 11gR2 Grid Infrastructure manual start/stop in a clustered environment

    Hello

    I have 11gR2 Grid Infrastructure installed in both a stand-alone environment and a clustered environment, and I am looking at the differences regarding manual startup/shutdown in these two environments.

    In the stand-alone environment, a manual stop really just consists of:

    Stopping ASM - I would generally use "srvctl stop asm" to do this

    Stopping HAS (Oracle Restart) - using "crsctl stop has".

    Startup would be just the reverse of these.

    However, things in the clustered environment seem to be a whole different kettle of fish!

    First of all, I appreciate that manually starting and stopping ASM and cluster resources (in both stand-alone and clustered environments) is not really essential for day-to-day operation (it's really for power outage situations), and a server startup or shutdown will start/stop the necessary services - so in a way this manual process is a bit contrived for day-to-day running.

    Q1. Given that there are so many different levels of processes/daemons in the clustered environment - is there still a viable manual startup procedure for 11gR2 GI in a clustered environment?

    Any help in identifying the manual start/stop order would be appreciated.

    Thank you

    Jim

    Hello

    You can stop/start the cluster and all resources with "crsctl stop/start crs". There is no need to stop ASM manually; it will be managed by Clusterware, as will the databases, listeners, etc.

    You can use "crsctl stat res -t" for an overview of the resources managed by CRS.

    "crsctl stat res ora.your_dbname.db Pei" gives you detailed information about the resource, including the start and stop option. Regularly, this should be the immediate mode for the dbs, if you're fine with this mode of closure you can simply use crsctl stop crs to stop everything on a single node.

    Concerning

    Thomas

  • Common causes of Mirage server failure in a clustered environment, and how clients are switched to the other server in a cluster

    Hello

    Can someone share information with me about common causes of Mirage server failure in a clustered environment?

    And how will clients be switched to the other server in the cluster to continue their operations after a server fails?

    Kind regards

    C Bathesha

    In general, Mirage servers do not fail. It is very rare (unlike, for example, problems with storage or endpoints, which are more common).

    Overload, too little memory, or hardware malfunction can also cause problems.

    After you do standard server troubleshooting (the system event log, etc.), you should file a service request with VMware.

  • SOA 11g clustered environment Ant scripts

    Hi all

    I just want to know: when deploying to a cluster in SOA 11g using Ant, do I need to add the oc4jinstancename property, as we did in the SOA 10g Ant scripts?

    Can someone explain, perhaps with an example, how to deploy SOA 11g with Ant to a clustered environment configuration?

    Thank you
    K

    Hi K,

    To my knowledge, you don't need any additional configuration for this. We developed Ant scripts for a non-clustered environment and they work perfectly well in the clustered environment too. We referred only to soa_server1 in the deployment scripts; WebLogic's default cluster consistency behavior replicates the deployment to all nodes in the cluster, so it was deployed to the second instance, soa_server2, automatically.

    Try it and let me know if you face any issues.

    Regards,
    Neeraj Sehgal

  • weblogicImportMetadata event handlers in a clustered environment

    I'm migrating from a single-node dev environment to a clustered production environment. In dev, I ran weblogicImportMetadata to import event handlers. In the clustered environment, do I need to run weblogicImportMetadata once per node, or once for the whole cluster? Thank you.

    You only need to import it once. Cluster or no cluster, it points to a single database, so run the import only once. The same goes for plugins and JAR files. As long as you aren't copying everything onto each physical host, you don't need to repeat the steps on all servers in the cluster.

    -Marie

  • Distributed topic and duplicated messages with clustered MDBs

    It's weird, no? I think it defeats the purpose of a "topic destination".

    WLS documentation:

    Deploying Message-driven beans on a distributed topic*.

    When an MDB is deployed on a distributed topic and is targeted at a WebLogic Server instance in a cluster that hosts two members of the distributed topic on a JMS server, the MDB gets deployed for each of those distributed topic members. This happens because MDBs are pinned to a distributed topic member's destination name.

    + That is why you will receive [number of messages sent] * [number of distributed topic members] messages per MDB, depending on how many distributed topic members are deployed on a WebLogic Server instance. For example, if a JMS server hosts two distributed topic members, then two MDBs are deployed, one for each member, so you will receive twice as many messages. +

    There are two very common use cases for topic MDBs:

    * One-copy-per-server: every server that hosts the MDB gets a copy of each published message. (In effect, each server creates its own subscription.)

    * One-copy-per-application: exactly one server that hosts the MDB gets a copy of any given published message. (The servers somehow share a single subscription, or perhaps share multiple partitioned subscriptions.)

    After your comment, I assume you are looking for the latter, not the former. It is possible to get one-copy-per-application behavior, sometimes by using a single subscription on one distributed topic member with some kind of forwarding to an intermediate queue, sometimes via other means.

    If you have the luxury of waiting, there is a set of enhancements to WL JMS and WL MDBs in the upcoming 10.3.4 release that are specifically designed to attack one-copy-per-application use cases. (WL 10.3.4 is also known as 11gR1PS3 - i.e. the next patch set.)

    Tom

  • Oracle CEP consuming from a WebLogic distributed queue

    I am using a distributed WebLogic JMS queue in a CEP application, but I'm having some problems with it.
    I have a SOA Suite 11gR3 cluster (two nodes on different machines) with a distributed queue, and I'm trying to write messages to it and consume them from it. The problem I have is that message production is not load balanced; in fact it writes only to node number 1. I have to consume these messages by correlation ID, so I have a bean in my CEP application that does this. But it cannot find all the messages, even though messages with the correlation ID are in the queue. I think maybe there is some configuration I'm missing on the JMS consumer. I'd appreciate any help.
    Thank you.
    Pablo

    With the dynamic approach you are taking to deliver a specific message to a specific thread, combined with the fact that the message you are interested in can apparently end up on either member, you will need to create a consumer with the correlation ID selector directly on each specific queue member, so that you can check each member for the message you want. The members are each individually advertised in JNDI as "jms-server-name@udd-jndi-name".
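    The member-naming convention above can be sketched as follows; the server and queue names are hypothetical, and the commented-out lines only indicate where the real JNDI lookup and consumer creation would go:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: enumerating the per-member JNDI names of a uniform distributed
// destination (UDD). WebLogic advertises each member individually as
// "jms-server-name@udd-jndi-name", so a correlation-ID consumer has to
// be created against each member in turn.
public class UddMembers {

    static String memberJndiName(String jmsServerName, String uddJndiName) {
        return jmsServerName + "@" + uddJndiName;
    }

    static List<String> memberJndiNames(List<String> jmsServers, String uddJndiName) {
        List<String> names = new ArrayList<>();
        for (String server : jmsServers) {
            names.add(memberJndiName(server, uddJndiName));
        }
        return names;
    }

    public static void main(String[] args) {
        // Hypothetical JMS server and queue names, for illustration only.
        for (String name : memberJndiNames(List.of("JMSServer1", "JMSServer2"), "jms/myQueue")) {
            System.out.println(name);
            // In real code, for each member:
            //   Destination member = (Destination) ctx.lookup(name);
            //   session.createConsumer(member, "JMSCorrelationID = '" + id + "'");
        }
    }
}
```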

    Alternatively, you can take a look at the thread "JMS ReplyTo - RequestQ and ResponseQ are in different managed servers" for some advice on typical request/response and correlation use cases.

    Note that if performance is a concern, you should be aware that creating/closing JMS consumers is often relatively expensive and, depending on the frequency, can even cause a multi-X slowdown in throughput...

    Kind regards

    Tom

  • Suggestions for vCenter capacity reporting and future deployment planning in a clustered environment

    Hello

    I currently have multiple vCenter servers managing multiple HA clusters. I was recently made responsible for providing a weekly report on each environment, and more precisely for determining current capacity and "how many more VMs can be added before new hardware is required". I did some research and I know there are some products on the market that can generate usage reports. There is also the CapacityIQ application, which could be useful for me as well.

    I wanted to ask if anyone has good suggestions on free reporting scripts that can be modified, or pre-packaged programs, that would provide the reports I want to generate. Basically, I need to know the total resources available in a given cluster (CPU, RAM, storage) and how much is currently allocated as well as actually in use. The goal is to take a very high-level overview of what resources we have and then determine how many additional VMs can be added to a cluster before we would need additional hardware. Again, I think this falls under the heading of capacity planning, but real-world user experience is always the best resource for weeding through all of the advertised products.

    Our management team also wants a brief summary: when our capacity reaches 75% (CPU/RAM) we would order additional hardware, and they want a way to report the information automatically on a weekly basis. Naturally, as any management team would say, free is best as long as it is useful, but if there are really good third-party programs they would be willing to bend. Thanks for anyone's personal experiences and recommendations with this kind of reporting, and if you know of any scripts we could use, that would be great.

    Thanks in advance,

    Chris

    You may want to look at Capacity Analyzer, which is free to try, or Capacity Modeler, which is free, but I have not used them.

  • Forward Delay attribute on a distributed queue

    Dear experts,

    I have a question about this attribute/property that you can set on a distributed queue in the administration console.

    According to the documentation/help for WebLogic 10.3,
    this attribute (Forward Delay) is described as follows:
    The number of seconds a uniform distributed queue member with no consumers will wait before forwarding its messages to other uniform distributed queue members that do have consumers.
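    In a JMS module descriptor this appears as the forward-delay element on the distributed queue; a sketch with hypothetical names:

```xml
<!-- Uniform distributed queue whose members with no consumers forward
     their pending messages to consuming members after 10 seconds.
     The queue name, subdeployment and JNDI name are hypothetical. -->
<uniform-distributed-queue name="myDistributedQueue">
  <sub-deployment-name>myJmsServers</sub-deployment-name>
  <jndi-name>jms/myDistributedQueue</jndi-name>
  <forward-delay>10</forward-delay>
</uniform-distributed-queue>
```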

    Interpreting the above, my understanding of the concept is that if a member of the distributed queue does not have any consumers, its messages will be forwarded to other members that do have consumers. However, if a member already has an associated consumer and that consumer is killed after some period of time, will the messages be transferred from that member to other members with consumers?

    Since a member with no consumers is no longer considered for message production.

    Thank you.

    However, if a member already has an associated consumer and that consumer is killed after some period of time, will the messages be transferred from that member to other members with consumers?

    Yes.

    Tom

  • Repository migration to a secondary server in a clustered environment

    I set up a clustered QA environment as follows:
    Machine 1 - primary cluster controller, Java host, Presentation Services and BI Server. This server also has the master repository.
    Machine 2 - secondary cluster controller, Java host, Presentation Services and BI Server.

    Everything works fine. Now I'm trying to migrate an updated repository from dev to QA. I brought all services down and copied the new repository to 3 places - the repository directory on server 1, the shared location, and the repository directory on the secondary server.

    When I bring the services up on machine 2, the repository on machine 2 is reverted to the previous version. I can see this by noting that the last-modified date of the repository gets changed to an earlier date. So I understand that machine 2 tries to synchronize with the master repository on machine 1. But why does it synchronize with an older version of the repository when I put the new repository in place as the master?

    Any help is greatly appreciated

    Published by: VNC on January 25, 2010 11:49

    In a clustered environment, when you modify the repository in online mode, it creates versioned files in the shared location, with names like "<rpd-name>.rpd.<n>". That file gets copied to the secondary repository when you restart the secondary BI server.
    In your case, you replaced the RPD file in 3 places, but those "<rpd-name>.rpd.<n>" files still reside in the shared location, and it is that file which gets copied to the secondary server. So remove the .sav files from the primary and secondary locations, and also remove all the "<rpd-name>.rpd.<n>" files from the shared location. That will solve your problem.

  • SSL certificates for a clustered environment

    Hi all
    I have a fairly large domain in an environment with an admin server and 6 managed servers.
    The managed servers are distributed across two physical machines, with the first machine also holding the admin server.
    Each pair of servers is joined in a cluster, so I have 3 clusters, each with a single application.

    Now some of the communication must be done over SSL, and I am wondering about the configuration. First of all, I should
    note that these certificates will not be visible to a client (browser); they will only be used for internal application-to-application communication.

    So, do I need a certificate for each managed server's identity keystore? Or can I use the same certificate for each of them?
    They will all be available under one URL, behind a few layers of routers. If I use the same certificate, can I also use it on the
    router, the one that clients see? Can I, or should I?

    You only need to tell the node manager where to find its certificates. If you have chosen SSL for your node manager, by default it uses the demo certs that come with WL. But you really don't want to use those...

    So in your nodemanager.properties, use something like:

    #
    # SSL configuration
    #
    KeyStores = CustomIdentityAndJavaStandardTrust
    CustomIdentityAlias = your_cert_alias
    CustomIdentityKeyStoreFileName = full_path_to_your_identity_keystore_used_by_your_mgd_server
    CustomIdentityKeyStorePassPhrase = your_storepass
    CustomIdentityKeyStoreType = jks
    CustomIdentityPrivateKeyPassPhrase = your_keypass

    This tells your node manager to use the same identity as your managed servers. Since it is using Java standard trust, it shares the same "cacerts" as the application server. In the console, your Machine -> Configuration -> Node Manager -> Type would be SSL.

    And that is all that is necessary for the node manager.

    In your trust keystore, you can simply add the signer / root CA certificate for your certificates, or you can add the individual server certificates if you want to restrict trust a little further. Normally identity certificates expire more frequently than root certificates, so I do not put identity certificates in the trust store, since that simply means more maintenance when they expire.
