WebSphere - Thread Pool - SOAP Connector

Is there a way to monitor the thread count of the SOAP connector thread pool rather than the thread pool utilization?

Agree with Robert, what you want to do is to copy the original rule and just edit the copy to look at #activeCount.

You may need to disable the original rule (if it doesn't behave the way you want), or add a condition to the modified rule that checks whether the pool name contains the word "SOAP".

Golan

Tags: Dell Tech

Similar Questions

  • Fixed size thread pool should throw an exception for additional tasks, but accepts more than expected

    Hello
    I have the following code in a simple program (code below)

    BlockingQueue<Runnable> q = new ArrayBlockingQueue<Runnable>(10, false);
    ThreadPoolExecutor newPool = new ThreadPoolExecutor(1, 10, 20, TimeUnit.SECONDS, q);

    for (int x = 0; x < 30; x++) {
        newPool.execute(new Threaded());
    }


    My understanding is that this should create a thread pool that can accept 10 tasks; once 10 tasks were submitted I should get a RejectedExecutionException. However, I see that when I run the code the pool accepts 20 execute calls before throwing RejectedExecutionException. I am on Windows 7 using Java 1.6.0_21.

    Any thoughts on what I am doing wrong?

    Thank you

    -----


    import java.util.concurrent.*;

    public class ThreadPoolTest {

        public static final class Threaded implements Runnable {

            @Override
            public void run() {
                System.out.println("Thread: " + Thread.currentThread().getId());
                try {
                    Thread.sleep(5000);
                } catch (InterruptedException e) {
                    System.out.println("Thread: " + Thread.currentThread().getId() + " interrupted");
                }
                System.out.println("Thread exiting: " + Thread.currentThread().getId());
            }
        }

        private static int MAX = 10;
        private ExecutorService pool;

        public ThreadPoolTest() {
            super();
            BlockingQueue<Runnable> q = new ArrayBlockingQueue<Runnable>(MAX / 2, false);
            ThreadPoolExecutor newPool = new ThreadPoolExecutor(1, MAX, 20, TimeUnit.SECONDS, q);
            pool = newPool;
        }

        /**
         * @param args
         */
        public static void main(String[] args) {
            ThreadPoolTest object = new ThreadPoolTest();
            object.doThreads();
        }

        private void doThreads() {
            int submitted = 0, rejected = 0;
            for (int x = 0; x < MAX * 3; x++) {
                try {
                    System.out.println(Integer.toString(x) + " submitting");
                    pool.execute(new Threaded());
                    submitted++;
                } catch (RejectedExecutionException re) {
                    System.err.println("Submission " + x + " rejected");
                    rejected++;
                }
            }

            System.out.println("\n\nSubmitted: " + MAX * 3);
            System.out.println("Accepted: " + submitted);
            System.out.println("Rejected: " + rejected);
        }
    }

    Interesting, it seems it does not reject until (queue size + thread pool size) submissions.

    --
    In fact, it's not surprising. The queue is the buffer and the thread pool size is the number of worker threads.
    With a core size of 1, the pool starts one thread, queues further tasks until the queue (capacity 10) is full, and only then creates extra threads up to the maximum of 10. Once both the queue and the maximum thread count are full, there is no space left and the submission is rejected, so 10 + 10 = 20 tasks are accepted.
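
    To make the arithmetic concrete, here is a minimal sketch (class and variable names are purely illustrative): with a core size of 1, a maximum of 10 threads and a bounded queue of 10, the 5-second tasks keep all workers busy, so 20 submissions are accepted and the remaining 10 are rejected.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.RejectedExecutionException;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class CapacityDemo {
        public static void main(String[] args) {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    1, 10, 20, TimeUnit.SECONDS,
                    new ArrayBlockingQueue<Runnable>(10));

            int accepted = 0, rejected = 0;
            for (int i = 0; i < 30; i++) {
                try {
                    pool.execute(new Runnable() {
                        public void run() {
                            try { Thread.sleep(5000); } catch (InterruptedException e) { }
                        }
                    });
                    accepted++;
                } catch (RejectedExecutionException e) {
                    rejected++; // both the queue (10) and the worker threads (10) are full
                }
            }
            System.out.println("accepted = " + accepted + ", rejected = " + rejected); // 20 and 10
            pool.shutdown();
        }
    }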

    Published by: Kayaman April 4, 2011 20:39

  • WebSphere Message Driven Bean pool miss warning

    I frequently receive the following alarm for an application running on WAS v7.

    Subject: JVMNAME_c1: WebSphere Message Driven Bean pool miss warning

    Message Driven Bean 'JVMNAME_MDB' in the 'APPLICATIONEAR' application on server 'JVM1' has a high pool miss percentage of 33%. This exceeds the warning threshold of 25%. Please use the following URL for details of the alarm: https://xxxxxxxxxxx.

    According to the Foglight documentation:

    If you experience performance problems, check the following:

    The minimum pool size value is set too low, so the container must keep creating new entity bean instances for the pool.

    ·         Try increasing the minimum pool size for your application.

    The maximum pool size value is set too low, so any thread that tries to get an entity bean from the pool has to wait in the queue.

    ·         Try increasing the maximum pool size for your application. For example, set it to something that exceeds the maximum number of concurrent users.

    I tried increasing the minimum and maximum pool values one at a time, but it did not help. The pool whose values I changed is the Thread Pool found here:

    Application servers --> "server_name" --> Message Listener Service --> Thread pool.

    Is this the correct pool I should be tuning? Or should I change the QCF connection/session pool found here: Resources --> JMS --> Queue connection factories --> "QCF_NAME" --> Session/connection pools?

    Thanks in advance.

    Do you have access to the developer or the person who built the application archive?

    In general, as part of an application EAR or JAR file there are definitions of the minimum and maximum number of EJBs in the pool. For WebSphere, I think they are in ejb-jar.xml and should look something like this:

    <max-beans-in-free-pool>2000</max-beans-in-free-pool>

    <initial-beans-in-free-pool>1000</initial-beans-in-free-pool>

    I found this on the IBM site about WebLogic to WebSphere migration, which may give more info:

    http://www.IBM.com/developerWorks/WebSphere/library/techarticles/0706_vines/0706_vines.html#sec4c

    C. max-beans-in-free-pool and initial-beans-in-free-pool

    The max-beans-in-free-pool element sets the maximum size of the EJB free pool for a specific entity bean, stateless session bean, or message-driven bean. Each bean class can define its own free pool size using this element.

    The initial-beans-in-free-pool element sets the initial number of bean instances that populate the free pool for a specific bean class at startup. Populating the free pool with instances improves initial response times, because initial requests for the bean can be satisfied without creating a new instance.

    In WebLogic, every bean in the application can specify its own initial and maximum pool size, and the pool size may vary considerably from one bean class to another. To determine the specific pool sizes, check weblogic-ejb-jar.xml for either of these elements: max-beans-in-free-pool or initial-beans-in-free-pool.


    To specify the EJB pool size in WebSphere Application Server, you can add a system argument to the JVM:

    1. In the administrative console, select Application servers => server_name => Process Definition => Java Virtual Machine.
    2. Add com.ibm.websphere.ejbcontainer.poolSize to the Generic JVM arguments field, after any existing JVM arguments. Here is an example that sets the minimum and maximum pool size to 1 and 5 for a bean called RulesEngineEJB:
    -Dcom.ibm.websphere.ejbcontainer.poolSize=myapp#RulesEngine.jar#example.RulesEngineEJB=1,5

    For more details, see the WebSphere EJB container tuning documentation.

    Hope this helps

    Golan

  • Setting up a thread pool for several GWWS endpoints

    Hello

    I am setting up a SALT application on Tuxedo 12 and I have some doubts that I would like to understand a little better:

    1. When I set up a GWInstance in my SALT config, if I have two inbound bindings and I set the thread pool to 20, does that mean I'll have 20 threads in a pool shared by the two endpoints, used by SALT as needed? Or 20 for each? Is it possible to configure a thread pool for each endpoint?

    For example, in a SALT config like the one below, can I set up one thread pool for tux_test and another for tux_test2 without needing another GWWS?

    <GWInstance id="tux_test">
        <Inbound>
            <Binding ref="tux_test:tux_test_Binding">
                <Endpoint use="tux_test_GWWS1_HTTPPort"></Endpoint>
            </Binding>
            <Binding ref="tux_test2:tux_test_Binding2">
                <Endpoint use="tux_test2_GWWS1_HTTPPort"></Endpoint>
            </Binding>
        </Inbound>
        <Properties>
            <Property name="thread_pool_size" value="20"/>
            <Property name="enableSOAPValidation" value="true"/>
        </Properties>
    </GWInstance>

    2. I would like to understand the GWWS communication with my Tuxedo service a little better. When an incoming request arrives at the GWInstance, it calls the service with a synchronous call (tpcall?), but I was wondering if there is a way to make this communication a little different (like using tpenqueue and releasing the GWWS thread while the service is running, i.e. an asynchronous call). Is there any configuration to make the GWWS -> Tuxedo ATMI service communication asynchronous?

    Thanks in advance for the help,


    brunno Attorre

    Hello Brunno,

    1. The thread pool is per GWWS process.

    2. Internally, GWWS handles requests asynchronously, i.e. it does not wait for one request to complete before handling another. Threads are busy only when there is actual work to do; for example, when a request is waiting for a service, the context is parked until the answer comes back, and then a thread from the pool resumes processing.

    Did you observe behavior suggesting that this is not the case?

    In any case, if you are after performance and don't mind errors not being caught earlier, I suggest disabling SOAP validation, which is quite expensive and usually only useful in development mode.

    Let us know if you have any other questions.

    Thank you

    Maurice

  • WebSphere Application Server 6.1 question (after applying a fix pack)

    Hyperic HQ Server and Agent are installed on Linux, both at version 3.2.2. I use Sun j2sdk1.4.2_16 to run the server and IBM Java 1.5.0 to run the agent (for WebSphere monitoring). Before applying a fix pack to WebSphere 6.1 (version 6.1.0.0), Hyperic could monitor WebSphere 6.1 successfully, but after applying fix pack 15 (version 6.1.0.15), Hyperic can no longer be configured correctly; the error I get is shown below.

    Unexpected error running autodiscoverer for the plugin: WebSphere 6.1 Admin: ADMC0016E: the system cannot create a SOAP connector to connect to the host localhost port 8879.

    Is there anyone who knows what this problem is? Help, please!

    Thank you

    The solution is to fill in soap.client.properties and ssl.client.properties.

    When you start the agent, make sure it finds the files it needs (set the log level to debug in /conf/agent.properties).

    After that, it should work.

    Regards
    Roadrunner

  • Thread caused a general protection fault

    Hello

    I am running a CVI user interface with 8 threads running tests at the same time. Once in a while, I get a FATAL RUN-TIME ERROR, see attachment. I am running CVI 2010 on Windows 7 with 4 GB of memory.

    Could someone help?

    Kind regards

    Javier

    When it comes to multithreaded applications, a number of elements play a significant role in how threads run. Using a multi-core processor is a sine qua non for these types of applications once you leave the basic level and move to using several threads or CPU-bound tasks.

    On the software side, first you must carefully strip out of your threaded routines all time-consuming parts that can be postponed or moved to the main thread.

    Second, you could try to increase the execution time given to your thread functions by calling SetSleepPolicy (VAL_SLEEP_NONE); before entering them. This usually gives better results if you create a thread pool and specify CmtSetThreadPoolAttribute (threadPool, ATTR_TP_PROCESS_EVENTS_WHILE_WAITING, TRUE); for the thread pool.

    In addition, the priority of individual threads can be set if you use CmtScheduleThreadPoolFunctionAdv: see this help topic and the related arguments in the online help. I'd try the sleep policy first and play with thread priority only if you keep having errors.

    A sine qua non of all this is being sure that you have no faulty code in your application and that the GPF issue is due only to processor load. In particular, you must debug all of your resource-accessing calls to be sure that no simultaneous access to shared resources, variables, or memory blocks is possible; using thread-safe variables or locks to prevent simultaneous access by different threads to critical objects can help here.

  • Thread ID vs thread function ID

    Hello

    I'm afraid I don't really understand the difference between a thread ID and a thread function ID, or, put differently, what is the purpose of the thread function ID and why are both needed?

    For example, calling CmtScheduleThreadPoolFunction returns a thread function ID, which can later be used to get a thread ID.

    For function calls such as PostDeferredCallToThread, the thread ID is required, which can be obtained directly only for the current thread via CmtGetCurrentThreadID; otherwise you need to know the thread function ID in order to get the thread ID... it seems a bit complicated.

    - Why can't I use the thread function ID only?

    Thank you in advance for the explanation...

    Wolfgang

    I think the analogy works, but Wolfgang's use of "thread function ID" is a bit mixed up:

    employee == thread (has a thread ID)

    task == thread function (has a thread function ID)

    The thread pool maintains a number of idle threads that sit around waiting for work (running the scheduled function). When a thread finishes its task, it is not destroyed; it just goes back into an idle state, waiting for more work. For this reason, it makes sense that the thread ID persists. On the other hand, if the same function is scheduled to run repeatedly, every time it runs it will have a new thread function ID. A thread function ID represents a particular instance of a function running on a thread pool thread.

    A. Mert

    National Instruments

  • Cannot access memory at address 0x0 at thread start...

    Here is my code,

    void AlertListPage::onOnCallClicked()
    {
        thread = new DownLoadThread();
        QThreadPool *pool = new QThreadPool();
        pool->start(thread,0);
    }
    
    class DownLoadThread: public QObject, public QRunnable {
        Q_OBJECT
    public:
        DownLoadThread();
        virtual ~DownLoadThread();
        virtual void run();
    ...
    ...
    };
    
    cpp...
    
    DownLoadThread::DownLoadThread() {
        // TODO Auto-generated constructor stub
    
    }
    
    DownLoadThread::~DownLoadThread() {
        // TODO Auto-generated destructor stub
        std::cout << "test";
    }
    
    void DownLoadThread::run()
    {
    ...
    ...
    }
    

    I set a breakpoint at the line:

        pool->start(thread, 0);

    and at the first line inside the run() method.

    Once pool->start runs, and before run() executes, it raises "cannot access memory at address 0x0".

    I do not see where I am using a null object...

    Thank you.

    Do you need a thread pool and QRunnable? You could also use QThread: http://qt-project.org/doc/qt-4.8/QThread.html

    Do you initialize QObject in the constructor? It doesn't look like it, since your constructor does not take a QObject as a parent. So maybe the parent is set to NULL.

  • The number of threads that can run simultaneously

    I have 5 threads (service tasks) running at the same time.

    Now, when I introduce a sixth (a timer task: either on demand once or at a fixed rate), it is blocked.

    I can run this service in an existing task, but then it is synchronous: unless and until the other subtask is complete, the new required service task cannot continue.

    I want to know whether there is a limit on the number of concurrent worker threads.

    I could put it in another thread as well, but that also imposes a wait...

    I know about the thread pool executors, but they are not backward compatible...

    Please enlighten me about concurrent threads on BlackBerry.

    Regards

    Yes sir,

    Threads solve certain problems. But the more threads there are, the more CPU they consume, and the device becomes slow.

    I've solved the problem using invokeLater, threads, and a scheduler with runnables.

    The application will slow down, but it performs.

    It is recommended not to extend TimerTask in your main class and reuse it, but to always make a new instance of a class that extends TimerTask; that was what was blocking my threads.
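
    As a minimal sketch of that last point (plain java.util.Timer usage; the class names are illustrative), schedule a fresh TimerTask instance each time rather than reusing one, since a TimerTask that has already been scheduled or cancelled cannot be scheduled again:

    import java.util.Timer;
    import java.util.TimerTask;

    public class PollScheduler {
        private final Timer timer = new Timer();

        static class PollTask extends TimerTask {
            public void run() {
                System.out.println("polling"); // one unit of background work
            }
        }

        public void start() {
            // a new PollTask instance per schedule call
            timer.schedule(new PollTask(), 0, 5000); // run every 5 seconds
        }

        public static void main(String[] args) {
            new PollScheduler().start();
        }
    }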

    Best regards.

  • ERROR: method called from a non-UI QThread thread

    I am trying to load images concurrently using QtConcurrent::mapped and I received this error.

    WARNING: ApplicationPrivate::context: ERROR, called from non-UI QThread thread (0xf9ff68, name = "Thread (pooled)")
    Fatal: ApplicationPrivate::context: method called from a non-UI thread

    I simplified the code as follows:

    void ImageScaler::queueScaling() {
        ...
        mImageScaling = new QFutureWatcher<Image>(this);
        mImageScaling->setFuture(QtConcurrent::mapped(mImageList, scaledImage));
        ...
    }
    
    Image scaledImage(QString file_name) {
       QImage qImage;
       bb::ImageData imageData(bb::PixelFormat::RGBA_Premultiplied, qImage.width(),
      qImage.height());  // causes error
      return imageData;
    }
    

    Anyone know what the problem is or how to work around it?

    Thanks for pointing out the bb::ImageData. I never noticed the incompatibility. The Image converts to ImageData at initialization, so that does not create an error. However, if I return an ImageData rather than the Image, it does not crash. I then simply convert the ImageData to an Image in the UI thread and it works very well.

    I'm curious to know why I can return a blank Image and I can return an ImageData, but I can't return an Image that's been converted from ImageData. I don't understand what can or cannot be done outside the UI thread.

  • What role does the thread with the default name of the form pool-n-thread-m play?

    Hello

    I am using version 5.0.97.

    Looking at the threads created by a node in the master role, I see a single thread without a descriptive name. It has the default executor name of the form pool-n-thread-m.

    What role does this thread play? I ask out of curiosity.

    "MASTER node1(1)" daemon prio=6 tid=0x0000000008d07800 nid=0x26cc waiting on condition [0x000000000a52f000]

    "Feeder Output for node2" daemon prio=6 tid=0x0000000009a9b800 nid=0x1e20 waiting on condition [0x000000000a82f000]

    "Feeder Input for node2" daemon prio=6 tid=0x0000000009a1d000 nid=0x25cc runnable [0x000000000a72f000]

    "MASTER node1(1)" daemon prio=6 tid=0x0000000008d07800 nid=0x26cc waiting on condition [0x000000000a52f000]

    "Acceptor Thread node1" daemon prio=6 tid=0x0000000008aed000 nid=0x2248 waiting on condition [0x000000000a42f000]

    "Learner Thread node1" daemon prio=6 tid=0x0000000008ad1000 nid=0x2058 waiting on condition [0x000000000a32f000]

    "ServiceDispatcher-localhost:5551" daemon prio=6 tid=0x0000000008a8a800 nid=0x2508 runnable [0x00000000097ff000]

    "Checkpointer" daemon prio=6 tid=0x0000000008a8a000 nid=0x9a0 in Object.wait() [0x00000000096ff000]

    "Cleaner-1" daemon prio=6 tid=0x0000000008a0c000 nid=0x16cc in Object.wait() [0x00000000095ff000]

    "INCompressor" daemon prio=6 tid=0x0000000008b4e000 nid=0x2560 in Object.wait() [0x00000000094ff000]

    "pool-1-thread-1" prio=6 tid=0x00000000070ff800 nid=0x222c waiting on condition [0x000000000a62f000]

    Could you consider naming this thread in a later version? It would help with diagnosing problems by allowing users to know with certainty which threads belong to JE.

    Best regards, Keith Wall.

    Keith,

    These threads are part of the thread pool that watches for incoming network connection requests and helps establish new sockets. We should certainly name them and will do so in the next release. I think they'll be named ServiceDispatcher.
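
    For reference, the default pool-n-thread-m names come from the JDK's default ThreadFactory; a pool built with its own ThreadFactory gets descriptive names instead. A minimal sketch (the class name and the "ServiceDispatcher" prefix are just illustrative):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ThreadFactory;
    import java.util.concurrent.atomic.AtomicInteger;

    public class NamedPoolExample {

        // Gives pool threads a descriptive prefix instead of the default "pool-n-thread-m"
        static class NamedThreadFactory implements ThreadFactory {
            private final String prefix;
            private final AtomicInteger counter = new AtomicInteger(1);

            NamedThreadFactory(String prefix) {
                this.prefix = prefix;
            }

            public Thread newThread(Runnable r) {
                Thread t = new Thread(r, prefix + "-" + counter.getAndIncrement());
                t.setDaemon(true);
                return t;
            }
        }

        public static void main(String[] args) {
            ExecutorService pool =
                    Executors.newFixedThreadPool(2, new NamedThreadFactory("ServiceDispatcher"));
            // in a thread dump these workers now show up as "ServiceDispatcher-1", "ServiceDispatcher-2"
            pool.execute(new Runnable() {
                public void run() {
                    System.out.println(Thread.currentThread().getName());
                }
            });
            pool.shutdown();
        }
    }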

    Kind regards

    Linda

  • Do JMX plugin service connections support "SOAP"?

    If I put this in a JMX plugin to connect to a properly configured JMX server on 'hostname', will it connect?

    "service:jmx:soap://hostname:8080/axis/services"

    BJ

    Hi BJ,

    We have tried it with the jmxmp connector, see the bottom of:
    http://support.Hyperic.com/display/hypcomm/advanced+JMX+plugins

    It is for use with the existing jmx-plugin.xml and was no doubt tested at the time with a 1.4 JRE.
    If this does not work, try adding the SOAP connector jar(s) to your plugin.jar lib directory or to pdk/lib/.

  • Crazy number of write-behind threads spawned in Coherence 3.7.1.6

    Hello
    I have a problem with Coherence 3.7.1.6:
    I have a distributed write-behind cache, and it happens that for each new NamedCache I obtain at runtime (my application creates caches dynamically), a new Coherence write-behind thread is spawned.
    The thread dump of one of these threads is:
    Name: WriteBehindThread:CacheStoreWrapper(MyCacheStore):DistributedCache
    State: TIMED_WAITING on com.tangosol.net.cache.ReadWriteBackingMap$WriteQueue@499fcbfd
    Total blocked: 0  Total waited: 2.011
    
    Stack trace: 
     java.lang.Object.wait(Native Method)
    com.tangosol.net.cache.ReadWriteBackingMap.waitFor(ReadWriteBackingMap.java:1332)
    com.tangosol.net.cache.ReadWriteBackingMap$WriteQueue.remove(ReadWriteBackingMap.java:3485)
    com.tangosol.net.cache.ReadWriteBackingMap$WriteThread.run(ReadWriteBackingMap.java:4163)
    com.tangosol.util.Daemon$DaemonWorker.run(Daemon.java:803)
    java.lang.Thread.run(Unknown Source)
    This is my setup:
           <near-scheme>
                <scheme-name>rlb</scheme-name>
                
                <invalidation-strategy>auto</invalidation-strategy>
                <front-scheme>
                    <local-scheme>
                        <scheme-ref>localL</scheme-ref>
                    </local-scheme>
                </front-scheme>
                <back-scheme>
                    <distributed-scheme>
                        <scheme-ref>backing_store_wb</scheme-ref>
                    </distributed-scheme>
                </back-scheme>
                <autostart>true</autostart>
            </near-scheme>
    
           <distributed-scheme>
                <scheme-name>backing_store_wb</scheme-name>
                <backup-count>1</backup-count>
                
                <backing-map-scheme>
                    <read-write-backing-map-scheme>
                        <scheme-name>PersistenceCallbackScheme</scheme-name>
                        <internal-cache-scheme>
                            <local-scheme>
                                <scheme-ref>localH</scheme-ref>
                            </local-scheme>
                        </internal-cache-scheme>
                        <cachestore-scheme>
    
                            <class-scheme>
                                <class-name>MySacheStore</class-name>
                                <init-params>
                                    <init-param>
                                        <param-name>cacheName</param-name>
                                        <param-type>java.lang.String</param-type>
                                        <param-value>{cache-name}</param-value>
                                    </init-param>
                                    <init-param>
                                        <param-type>{cache-ref}</param-type>
                                        <param-value>persistence_info_cache</param-value>
                                    </init-param>
                                </init-params>
                            </class-scheme>
                        </cachestore-scheme>
                        <write-delay>5s</write-delay>
                        <write-batch-factor>0.2</write-batch-factor>
                        <write-requeue-threshold>10</write-requeue-threshold>
                    </read-write-backing-map-scheme>
    
                </backing-map-scheme>
                <autostart>true</autostart>
            </distributed-scheme>
    
            <local-scheme>
                <scheme-name>localL</scheme-name>
                <eviction-policy>HYBRID</eviction-policy>
                <high-units>200m</high-units>
                <low-units>150m</low-units>
                <unit-calculator>FIXED</unit-calculator>
                <expiry-delay>1h</expiry-delay>
            </local-scheme>
    Is this normal? From what is written in the post I am replying to, Coherence should use a thread pool to perform the write-behind for all caches belonging to the same service; am I wrong?

    Published by: e.gherardini on May 24, 2013 16:06

    Hello

    The service thread pool, which is optional and is configured through the <thread-count> element, is shared between all caches. That is true.
    But with deferred (write-behind) writes, the service pool thread is only responsible for putting the entry on the queue. The write-behind itself is the responsibility of the ReadWriteBackingMap instance.
    A separate ReadWriteBackingMap instance is created for each cache, and each one creates a single write-behind thread.

    So what you are looking at is expected behavior.

    UPDATE:

    If your caches are temporary, you can release the write-behind thread (and other related resources) by calling NamedCache.destroy() on the unused caches.
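
    For example, a minimal sketch of that cleanup (the cache name and the flow are hypothetical); destroy() releases the cache cluster-wide, and its write-behind thread goes away with it:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class TemporaryCacheCleanup {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("temp-results"); // hypothetical cache name
            cache.put("key", "value");
            // ... work with the temporary cache ...
            cache.destroy();         // release the cache and its write-behind resources
            CacheFactory.shutdown(); // only when this client is completely done with Coherence
        }
    }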

    Kind regards
    Alexey

    Published by: alexey.ragozin on May 24, 2013 19:33

  • Thread queue length count is negative and too many threads are WAITING

    We use WebLogic Server 9.2.2 with 1 admin server and 4 managed servers. Currently, on one of the servers, I saw that there are about 77 waiting threads.


    Home > Summary of Servers > server1 > Monitoring > Threads > Self-Tuning Thread Pool

    I can see that the 'Queue Length' is negative (-138) and the self-tuning standby thread count is 77. The large number of WAITING threads persists during busy times while the other servers are fully used.


    Is it normal to have a negative queue length and so many threads in WAITING state? With regard to negative JMS queue counts, Oracle has already acknowledged that as a bug. Thank you.

    Published by: 855849 on May 1st, 2011 07:19

    Published by: SilverHawk may 12, 2011 08:12

    Is it normal to have a negative queue length

    No, it is not normal. It is probably a bug.

  • ScheduledThreadPoolExecutor with minimum and maximum threads

    Hi all

    I need to implement something that schedules a potentially large number of small tasks over time. I would use a thread pool to do this efficiently. I would like a configuration of, for example, 1-10 threads, where threads are created as needed and, if the maximum number of threads is reached, tasks are queued. The tasks are timed and repeating, so I prefer the executor's automated scheduling. The deployment environment and the number of tasks to manage will vary, so adapting the configuration at runtime is important. When there is no work, I would like the pool to scale down over time as well. Here is my little attempt to reach a suitable design with the Java concurrency API...

    Now, looking at my options, it seems that ScheduledThreadPoolExecutor is a good match for my needs. So what I would like to implement is a mechanism to define the minimum and maximum number of threads that are created for the thread pool. I understand that these are the "core" thread count and the maximum thread count in java.util.concurrent terminology. However, I am struggling to create what I want.

    It seems to me that ScheduledThreadPoolExecutor is always created as a fixed-size thread pool. So I can't say I want a minimum of 1 thread and a maximum of 10. There is a setMaximumPoolSize() method I can call on the executor, but it seems to have no effect. If I create my own ThreadFactory and use it to track when new threads are created, create a ScheduledThreadPoolExecutor with a core size of 1 thread, set the maximum to 2 threads, and schedule a set of 6 continuously looping tasks, it never starts more than 1 thread. So it seems it completely ignores setMaximumPoolSize().

    There is the option to call allowCoreThreadTimeOut() and setKeepAliveTime() on the executor. When I look at the ScheduledThreadPoolExecutor code, I also see that it always prestarts threads up to the core thread count when a task is scheduled. So even if I got coreThreadTimeOut and keepAliveTime to work, those attributes cannot help me because it would always just ramp back up to the core number. Otherwise maybe I could use that with a minimum of 0 and a maximum equal to the "core" count. Or is the core timeout meant for some other purpose?

    Well, the basic question here is: is it possible to create an executor that creates threads according to need (between an x1 and x2 that I define, for example, 1-10), but only starts queuing incoming tasks once x2 is reached? It seems the whole API assumes I want either a fixed number of threads with an unbounded queue, or an unbounded number of threads with a bounded queue. Is there no way to have a bounded set of threads with min and max thread counts, plus an unbounded queue once the max thread count is reached? And can I get that with ScheduledThreadPoolExecutor, please?

    Or maybe I just have the wrong idea about concurrency here somewhere? Is it some kind of concurrency best practice to immediately grab the maximum number of threads you think you might need later, to avoid resource conflicts? Is that why there is only the "fixed" size? In any case, ideas and help are appreciated.

    Well thanks for reading this far... and all answers of course!

    Yes, this particular scenario is difficult to achieve, and yes, that seems to be the standard behavior to me as well. You can sort of achieve this effect by using the "allow core threads to time out" feature. Basically, you build a pool of 'fixed' size with an unbounded queue, but with the 'allow core thread timeout' flag set to true. This will essentially give you what you want, except that the 'min' size is 0. This seems to be a reasonable compromise to me because you do not need to write custom code (I have previously implemented the scenario you describe with an unbounded blocking queue that 'lies' about being full until the maximum pool size is reached).

    example:

            ThreadPoolExecutor executor = new ThreadPoolExecutor(maxPoolSize, maxPoolSize,
                                                                 60L, TimeUnit.SECONDS,
                                                                 new LinkedBlockingQueue<Runnable>(),
                                                                 threadFactory);
            executor.allowCoreThreadTimeOut(true);
            return executor;
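
    As a usage sketch of that approach (the class and method names are just illustrative), the pool starts empty, grows on demand up to the 'fixed' size, and shrinks back to zero once the keep-alive expires:

    import java.util.concurrent.*;

    public class ElasticPoolFactory {

        // Same idea as the snippet above: a 'fixed' size pool whose idle workers time out,
        // so it effectively scales between 0 and maxPoolSize threads.
        static ThreadPoolExecutor newElasticPool(int maxPoolSize, ThreadFactory threadFactory) {
            ThreadPoolExecutor executor = new ThreadPoolExecutor(maxPoolSize, maxPoolSize,
                                                                 60L, TimeUnit.SECONDS,
                                                                 new LinkedBlockingQueue<Runnable>(),
                                                                 threadFactory);
            executor.allowCoreThreadTimeOut(true);
            return executor;
        }

        public static void main(String[] args) {
            ThreadPoolExecutor pool = newElasticPool(10, Executors.defaultThreadFactory());
            System.out.println("threads before: " + pool.getPoolSize()); // 0 until work arrives
            for (int i = 0; i < 4; i++) {
                pool.execute(new Runnable() {
                    public void run() {
                        try { Thread.sleep(1000); } catch (InterruptedException e) { }
                    }
                });
            }
            System.out.println("threads while busy: " + pool.getPoolSize()); // grows with demand, up to 10
            pool.shutdown();
        }
    }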
    
