NullPointerException invoking on partitioned Cache processor

Hi all

We get a NullPointerException when an EntryProcessor running against one partitioned cache service invokes an EntryProcessor on a second partitioned cache service.
The strange thing is that it works fine when there is only a single node, but fails when there is more than one node. I guess that makes some sense, since no serialization takes place when there is only a single node.

Can anyone suggest what may be causing the problem, from the stack trace below?


2012-05-02 16:20:19.715/3.272 Oracle Coherence GE 3.7.1.3 <D6> (thread=Proxy:FeedHandlerExtendTcpProxyService:TcpAcceptor, member=3): Opened: Channel(Id=1136375156, Open=true, Connection=0x000001370E233D590A660314E7BFF3B54A24F82E01F98987CE652E1F3C5DD30E)
2012-05-02 16:20:29.910/13.467 Oracle Coherence GE 3.7.1.3 <D5> (thread=Proxy:FeedHandlerExtendTcpProxyService:TcpAcceptorWorker:1, member=3): An exception occurred while processing an InvokeRequest for Service=Proxy:FeedHandlerExtendTcpProxyService:TcpAcceptor: (Wrapped: Failed request execution for TradesPartitionedCache service on Member(Id=3, Timestamp=2012-05-02 16:20:18.135, Address=10.102.3.20:8088, MachineId=63987, Location=site:,machine:LONW00067144,process:9356, Role=cache)) Portable(com.tangosol.util.WrapperException): (Wrapped: Failed request execution for PositionPartitionedCache service on Member(Id=2, Timestamp=2012-05-02 16:13:27.876, Address=10.102.3.20:8090, MachineId=63987, Location=site:,machine:LONW00067144,process:12892, Role=cache)) null
at com.tangosol.util.Base.ensureRuntimeException(Base.java:288)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.tagException(Grid.CDB:36)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:68)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:1)
at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:63)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
at java.lang.Thread.run(Unknown Source)
Caused by: Portable(com.tangosol.util.WrapperException): (Wrapped: Failed request execution for PositionPartitionedCache service on Member(Id=2, Timestamp=2012-05-02 16:13:27.876, Address=10.102.3.20:8090, MachineId=63987, Location=site:,machine:LONW00067144,process:12892, Role=cache)) null
at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:368)
at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
at com.tangosol.coherence.component.net.message.SimpleResponse.read(SimpleResponse.CDB:6)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:19)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:31)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
... 2 more
Caused by: Portable(java.lang.NullPointerException)
at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
at com.tangosol.io.pof.PortableException.readExternal(PortableException.java:150)
at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:59)
... 12 more


Thank you

Published by: bish on May 2, 2012 17:42

Hello

Does the cache called CacheConstants.PCACHE exist - i.e. has some code somewhere called CacheFactory.getCache(CacheConstants.PCACHE)?
Calling ctx.getBackingMapContext(CacheConstants.PCACHE) will not create the backing map, so if the cache does not exist yet then the backing map will be null.

JK
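
For illustration, here is a minimal sketch of the defensive check this implies, against the standard Coherence 3.7 API. The class name and error handling are made up for the example; CacheConstants.PCACHE and getBackingMapContext() come from the thread:

import com.tangosol.net.BackingMapContext;
import com.tangosol.util.BinaryEntry;
import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;

// Hypothetical processor showing the null check described above.
public class SafeCrossCacheProcessor extends AbstractProcessor {
    @Override
    public Object process(InvocableMap.Entry entry) {
        // getBackingMapContext() does not create the cache; it returns null
        // until some member has called CacheFactory.getCache(CacheConstants.PCACHE).
        BackingMapContext bmCtx = ((BinaryEntry) entry).getContext()
                .getBackingMapContext(CacheConstants.PCACHE);
        if (bmCtx == null) {
            throw new IllegalStateException(
                    "Cache " + CacheConstants.PCACHE + " has not been created yet");
        }
        // ... safe to use bmCtx.getBackingMapEntry(...) from here on ...
        return null;
    }
}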

Tags: Fusion Middleware

Similar Questions

  • What happens to a lock when an element expires in a partitioned cache?

    With a partitioned cache, what happens if an object with a lock on it expires?

    In other words, if I put an entry in with an expiry, somebody locks it, and it expires while the lock is held, what happens?

    Hi mesocyclone.

    The lock/unlock API is completely orthogonal to the data-related APIs (get, put, invoke, etc.). The presence or absence of the data has no effect on the lock.

    Kind regards
    Gene
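
    To make Gene's point concrete, here is a minimal sketch; the cache name and timings are made up, while lock(), put() with expiry, and unlock() are standard NamedCache calls:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class LockVsExpiryDemo {
        public static void main(String[] args) throws InterruptedException {
            NamedCache cache = CacheFactory.getCache("test"); // hypothetical cache name

            cache.lock("key-1", -1);            // wait as long as needed for the lock
            cache.put("key-1", "value", 5000L); // put with a 5 second expiry

            Thread.sleep(6000L);                // let the entry expire

            // The value is gone, but the lock on the key is still held.
            System.out.println("after expiry: " + cache.get("key-1")); // prints null
            cache.unlock("key-1");              // the lock still has to be released

            CacheFactory.shutdown();
        }
    }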

  • Concurrency with replicated cache entry processors

    Hello

    The documentation says that entry processors on a replicated cache are run on the initiating node.

    How is concurrency managed in this situation?
    What happens if two or more nodes are asked to run something on an entry at the same time?
    What happens if the initiating node is storage-disabled?

    Thank you!

    Jonathan.Knight wrote:
    In a distributed cache an entry processor runs on the node that owns the cache entry. In a replicated cache the same entries are on all nodes, so I think one of the questions was what happens in this scenario. I presume the EP executes on only one of the nodes - it would be unwise to run it on all nodes - but which one? Is there still a concept of ownership for a replicated cache, or is it random?

    At this point I would normally code a quick experiment to prove what happens, but unfortunately I'm a little busy right now.

    JK

    Hi Jonathan,

    In the replicated cache there is still a notion of ownership of an entry; in Coherence terms it is called a lease. An entry is owned by the last node to perform a successful change on it, where a change can be a put/remove, but also a lock operation. Lease granularity is per entry.

    Practically, the lock operation in the code Dimitri pasted serves two purposes. First, it ensures that no other node can lock the entry; second, it brings the lease to the locking node, so that the entry processor can properly run locally against the entry.

    Best regards

    Robert
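
    As a concrete illustration of the lease transfer Robert describes (we do not have Dimitri's original code, so this is only a sketch of the lock-then-invoke pattern under that assumption):

    import com.tangosol.net.NamedCache;
    import com.tangosol.util.InvocableMap;

    public final class LeaseAwareInvoke {
        // Locking first transfers the lease for the key to this node, so the
        // entry processor then runs against the entry's current owner.
        public static Object invokeLocked(NamedCache cache, Object key,
                                          InvocableMap.EntryProcessor processor) {
            cache.lock(key, -1);
            try {
                return cache.invoke(key, processor);
            } finally {
                cache.unlock(key);
            }
        }
    }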

  • Problem executing entry processors on nodes with local storage disabled

    Hi everyone,

    We are testing the Coherence JSR107 (a.k.a. JCache) implementation by Yannis (https://github.com/yannis666/Coherence-JSR-107) and we found the following issue when working with a distributed cache node on which local storage is disabled (using -Dtangosol.coherence.distributed.localstorage=false). Yannis' implementation uses several EntryProcessors to execute get() or put() operations against the cache. For example, the following code is used to update a value in the cache:

    public class PutProcessor implements InvocableMap.EntryProcessor {

        @Override
        public Object process(InvocableMap.Entry entry) {
            BinaryEntry bEntry = (BinaryEntry) entry;
            return bEntry.isPresent() ? bEntry.getBinaryValue() : null;
        }
    }

    And the put operation is invoked with the following code:

    namedCache.invoke(key, processor)

    where the processor argument holds a reference to the PutProcessor object.


    When we call the JSR107 put operation against the cache and the code above runs, we get the following exception:

    java.lang.NullPointerException
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.invoke(PartitionedCache.CDB:10)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:52)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:34)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:33)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:662)

    Of course we had previously started another node in the cluster with local storage enabled.

    It seems that, as no storage map is present on this node, a NullPointerException is thrown when it tries to insert the entry.

    Could someone give advice on this problem and how to overcome it? Is this processor-based implementation simply not compatible with storage-disabled cache nodes?

    Thank you in advance,

    Daniel C.

    Hello

    It sounds like COH-7180 - a request deserialization failure results in a cryptic NullPointerException - which has been fixed in one of the 3.7.1 patches. The real problem is that your EntryProcessor cannot be deserialized.
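
    In other words, the fix is on the serialization side. As a minimal sketch, assuming POF is in use: a processor like the PutProcessor above must be serializable - e.g. implement PortableObject - and be registered in the pof-config.xml of both the client and the storage-enabled members, otherwise the invocation fails with errors like the ones quoted. The class name and field layout here are made up for the example:

    import java.io.IOException;

    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
    import com.tangosol.io.pof.PortableObject;
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;

    public class PofPutProcessor extends AbstractProcessor implements PortableObject {
        private Object value; // the value to store; must itself be POF-serializable

        public PofPutProcessor() {} // required for deserialization

        public PofPutProcessor(Object value) {
            this.value = value;
        }

        @Override
        public Object process(InvocableMap.Entry entry) {
            Object previous = entry.isPresent() ? entry.getValue() : null;
            entry.setValue(value);
            return previous;
        }

        @Override
        public void readExternal(PofReader in) throws IOException {
            value = in.readObject(0);
        }

        @Override
        public void writeExternal(PofWriter out) throws IOException {
            out.writeObject(0, value);
        }
    }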

  • Strange read operation right after entry processor work

    Hello.
    We use a combination of a cache listener and an entry processor to perform certain actions when data arrives in Coherence. We are using Oracle Coherence version 3.5.3/465.

    Just after the entry processor has set the new value for the entry, a new 'get' operation is issued against the cache, and a JDBC hit occurs for that key.

    Here's the entry processor:
    public Object process(Entry entry) {        
            if (!entry.isPresent()) {
                // No entities exist for this CoreMatchingString - creating new Matching unit
                MatchingUnit newUnit = new MatchingUnit(newTrade);
                entry.setValue(newUnit, true);
                
                return null;
            }
    
            ((MatchingUnit)entry.getValue()).processMatching(newTrade);
    
            return null;
        }
    Very interestingly, if I use entry.setValue(value) without the second parameter, I get the DB hit right inside the setValue method. According to the docs, the single-argument setValue() returns the previous value, so it is logical that the cache hit (and therefore the DB hit) happens right there. But I use the overloaded version, void setValue(java.lang.Object oValue, boolean fSynthetic), which is supposed to be lightweight and should not look up a previous version of the object. Yet the lookup happens anyway - not in setValue itself, but just after the process() method is called.

    It is strange that Coherence tries to retrieve the previous value in this case, when there was none! cache.invoke(matchingStr, new CCPEntryProcessor(ccp)) is called for a record that does not exist, and the record is created right on the invocation. Maybe it is a bug, or a place for optimization.

    Thank you

    BITEC wrote:
    Thank you, Robert, for this detailed response.

    It is still not clear to me why synthetic inserts are questionable. There are many cases when the client simply updates/inserts a record (using setValue()) and does not need to receive the previous value. If it needs the previous value, it calls the method:

    Hi Anton,

    It is questionable because the purpose of the fSynthetic flag is NOT to let you optimize away a cache store operation. A synthetic event means that it is not a real change to the data set triggered by the user; it is something Coherence has done to the backing map for its own reasons and decisions, to be able to provide high availability for the data. It changes only that particular Coherence node's subset of the data and has no meaning for the full data set that logically exists. Those reasons are partition movement and cache eviction (or possibly other reasons why Coherence would want to change the content of a backing map without telling the user that anything has changed).

    If you set the flag, you attempt to masquerade a data change as an event that Coherence itself decided to trigger. That is questionable. Also, synthetic backing-map events may not always lead to cache events being dispatched (for partition rebalancing they certainly are not), and that optimization may later be extended to other synthetic events as well.

    java.lang.Object setValue(java.lang.Object oValue)
    

    and receive the previous value. If it does not need the previous value, it calls:

    void setValue(java.lang.Object oValue, boolean fSynthetic)
    

    and does not receive the previous value, as the method is declared void. Thus it cannot get the previous value at all through this API, short of a manual DB call.

    Yes, because Coherence is not interested in the old value in the case of a synthetic event. The synthetic methods exist so that an entry can be changed in Coherence (usually by Coherence itself) in a way that signals a synthetic event, so that listeners can recognize that no real data change happened.

    A valid use of this kind of feature with setValue called from user code could be value compaction: replacing the value stored in the backing map with a more compact representation, which is not a change to the actual cached value, only to its representation. Of course, if the two-argument setValue method does not actually honor the synthetic flag, then such a feature still incurs all the costs of a normal one-argument setValue call.

    But the previous value is read by Coherence itself anyway, just after process(), and the client does not even receive it!

    But all the listeners on the cache may receive it, because of the cache event semantics.

    In this case I consider it a bug: the client using this API does not expect a cache hit to take place (the overloaded setValue() method has no return value), but the hit happens anyway and leads to additional problems via the read-through mechanism.

    I would not consider it a bug; it is probably a case of documenting a possible optimization too early, when ultimately it did not get put in place. I certainly would not try to abuse it to set a value without triggering a DB fetch, as again the intention of the synthetic flag relates not only to cache loader functionality but also to events, and to marking whether a change is a change of actual data or a Coherence data-management action.

    Now I understand why Coherence cannot know whether this is an insert or an update; thanks for the detailed explanation.

    Anton.

    * Edit: I was looking at this problem from the point of view of the user, but perhaps this additional fetch is necessary for event producers, which must raise events containing the old/new values. In that case this seems to be correct behavior... It seems I need some other workaround to avoid the DB hit. The best solution is an empty load() for misses...

    You can try to find a workaround, but it is an ugly Pandora's box, because of the scenario where multiple threads try to load the same entry, and some of them try to load it for a legitimate reason.

    You can try putting a flag in a thread-local to indicate that you really do want to load that particular key. The problem is that, depending on configuration and race conditions, your cache loader may not be invoked on the thread that set the thread-local - another thread's load may be the only one invoked - and in that case its return value will be returned to all the other threads too; you can also end up with a polluted thread-local.
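
    A sketch of that thread-local workaround, with its pitfall noted; the class and method names are made up, while CacheLoader is the standard Coherence interface:

    import java.util.Collection;
    import java.util.Collections;
    import java.util.Map;

    import com.tangosol.net.cache.CacheLoader;

    public class GuardedLoader implements CacheLoader {
        private static final ThreadLocal<Boolean> REALLY_LOAD = new ThreadLocal<Boolean>() {
            @Override protected Boolean initialValue() { return Boolean.FALSE; }
        };

        public static void allowLoad() { REALLY_LOAD.set(Boolean.TRUE); }
        public static void reset()     { REALLY_LOAD.set(Boolean.FALSE); }

        @Override
        public Object load(Object key) {
            // Pitfall: under a thundering herd the load may run on another
            // thread that never called allowLoad(), and its result is shared
            // with every waiting thread.
            if (!REALLY_LOAD.get().booleanValue()) {
                return null; // suppress the DB hit
            }
            return loadFromDatabase(key);
        }

        @Override
        public Map loadAll(Collection keys) {
            return Collections.emptyMap(); // simplified for the sketch
        }

        private Object loadFromDatabase(Object key) {
            return null; // placeholder for the real JDBC lookup
        }
    }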

    Best regards

    Robert

    Published by: robvarga on October 15, 2010 16:25

  • Processing information in several caches on the same node

    We use a partitioned cache service to load data (several types of data) into multiple named caches. We plan to have all the related data in a single partition, and we have achieved this using key association.

    Now I want to do the processing on that node, reconciling the data from the various sources. We tried an entry processor, but we want to consolidate the data from multiple named caches on the node. Very naïvely, I think of each named cache as a table, and I am looking for a way to have a processor do some processing on the associated data.

    I see that we can use some combination of key association, the Invocation service and entry processors, but I have been unable to implement it successfully.

    Can you please point me to any reference implementation where I can perform processing on the data node without transferring the data to the client.

    Also, any reference implementation of map/reduce (on the server side) in Coherence would be useful.

    Regards
    Gaudin

    Gaudin,

    The same concept applies to batch processing. You can run an entry processor that starts a background thread to do the batch processing. The only drawback is that you must manage failover explicitly, to restart the thread on a new member if a member fails. This can be done by using a lease from the Incubator Common package (common.lease).

    Paul
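
    For the cross-cache consolidation itself, here is a minimal sketch of an entry processor reading a co-located entry from a second cache. The cache name "positions" is made up, and the sketch assumes both caches run on the same partitioned service, use the same key, and are kept in the same partition by key association:

    import com.tangosol.net.BackingMapContext;
    import com.tangosol.util.Binary;
    import com.tangosol.util.BinaryEntry;
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;

    public class ReconcileProcessor extends AbstractProcessor {
        @Override
        public Object process(InvocableMap.Entry entry) {
            BinaryEntry binEntry = (BinaryEntry) entry;

            // Reach into the co-located cache through its backing map context.
            BackingMapContext positionsCtx =
                    binEntry.getContext().getBackingMapContext("positions");
            Binary binKey = binEntry.getBinaryKey();
            InvocableMap.Entry positionEntry =
                    positionsCtx.getBackingMapEntry(binKey);

            Object source = binEntry.getValue();
            Object other  = positionEntry.isPresent() ? positionEntry.getValue() : null;
            // ... reconcile 'source' against 'other'; nothing leaves the node ...
            return null;
        }
    }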

  • Satellite C850D-11Q - which processors are supported

    I was just curious to know which processors this model can support.

    Hello

    The Satellite C850D-11Q is equipped with the AMD A68M chipset.
    Therefore ONLY AMD processors (probably only the AMD Comal mobile processors) are supported.

    But as you probably know, CPU upgrades are not supported by any of the notebook manufacturers.
    This mainly means that other processors have not been tested, and also that the BIOS, which is not configured for other processors, could have problems recognizing a new processor.

    However, I found some processors that have shipped in other C850D / C855D series models:

    AMD Comal A10 processor:
    A10-4600M (Max/Base: 3.2 GHz/2.3 GHz) (4 MB L2 cache)
    AMD Comal A8 processor:
    A8-4500M (Max/Base: 3.0 GHz/2.1 GHz) (4 MB L2 cache)
    AMD Comal A6 processor:
    A6-4400M (Max/Base: 3.2 GHz/2.7 GHz) (1 MB L2 cache)
    AMD Comal A4 processor:
    A4-4300M (Max/Base: TBD) (1 MB L2 cache)

    This info may be useful for you.

  • Upgrading processor on Satellite L500-1QE

    I have a Satellite L500-1QE and I want to know if I can upgrade my processor to something newer, because right now I use a 2.2 GHz Dual-Core T4400.

    (I don't know much about upgrades, which is why I ask.)

    Thanks in advance.

    Hello

    You must ensure that the new CPU would be supported by the chipset built into the notebook.
    I think your laptop uses the Mobile Intel GM45 Express chipset.

    I checked the Intel page for compatible processors, and it seems these processors are supported by the chipset:
    Intel® Core 2 Extreme X9100 processor (3.06 GHz, 1066 MHz FSB, 6 MB Cache)
    Intel® Core 2 Duo T9400 processor (2.53 GHz, 1066 MHz FSB, 6 MB Cache)
    Intel® Core 2 Duo T9600 processor (2.80 GHz, 1066 MHz FSB, 6 MB Cache)
    Intel® Core 2 Duo P9500 processor (2.53 GHz, 1066 MHz FSB, 6 MB Cache)
    Intel® Core 2 Duo P8600 processor (2.40 GHz, 1066 MHz FSB, 3 MB Cache)
    Intel® Core 2 Duo P8400 processor (2.26 GHz, 1066 MHz FSB, 3 MB Cache)
    Intel® Celeron® 575 processor (2.00 GHz, 667 MHz FSB, 1 MB Cache)
    Intel® Celeron® 585 processor (2.16 GHz, 667 MHz FSB, 1 MB Cache)
    Intel® Core 2 Extreme QX9300 processor (2.53 GHz, 1066 MHz FSB, 12 MB Cache)
    Intel® Core 2 Duo T9800 processor (2.93 GHz, 1066 MHz FSB, 6 MB Cache)
    Intel® Core 2 Duo P8700 processor (2.53 GHz, 1066 MHz FSB, 3 MB Cache)
    Intel® Core 2 Quad Q9100 processor (2.26 GHz, 1066 MHz FSB, 12 MB Cache)
    Intel® Celeron® T1700 processor (1.83 GHz, 667 MHz FSB, 1 MB Cache)
    Intel® Core 2 Duo T9550 processor (2.66 GHz, 1066 MHz FSB, 6 MB Cache)
    Intel® Celeron® T3100 processor (1.90 GHz, 800 MHz FSB, 1 MB Cache)
    Intel® Core 2 Duo P9600 processor (2.66 GHz, 1066 MHz FSB, 6 MB Cache)
    Intel® Celeron® T1600 processor (1.66 GHz, 667 MHz FSB, 1 MB Cache)
    Intel® Core 2 Duo T9900 processor (3.06 GHz, 1066 MHz FSB, 6 MB Cache)
    Intel® Core 2 Duo P8800 processor (2.66 GHz, 1066 MHz FSB, 3 MB Cache)
    Intel® Celeron® 900 processor (2.20 GHz, 800 MHz FSB, 1 MB Cache)
    Intel® Celeron® T3500 processor (2.10 GHz, 800 MHz FSB, 1 MB Cache)
    Intel® Core 2 Duo T6670 processor (2.20 GHz, 800 MHz FSB, 2 MB Cache)
    Intel® Core 2 Duo P7570 processor (2.26 GHz, 1066 MHz FSB, 3 MB Cache)
    Intel® Core 2 Duo P9700 processor (2.80 GHz, 1066 MHz FSB, 6 MB Cache)
    Intel® Celeron® T3300 processor (2.00 GHz, 800 MHz FSB, 1 MB Cache)
    Intel® Celeron® 925 processor (2.30 GHz, 800 MHz FSB, 1 MB Cache)

    But there is still a question about BIOS support; I am not sure whether the BIOS would be compatible with all of these CPUs. A CPU upgrade is always a tricky thing.

  • Program for SSD caching on a Y580

    Currently I have my OS installed on the 32 GB SSD of my Lenovo Y580. Recently I discovered that I should not have installed it there, but on the HDD, and should use the SSD for caching. What program should I use for caching, since the Y580 does not support RAID? And since my OS is on the SSD, should I just re-install it on the HDD? I have some personal data like pictures, movies, music, and documents on the HDD, and if I reinstall Windows I will lose them. They are important to me!

    Hello

    Here's the solution provided in the article:

    You will need to erase the SSD partitions, so you will need to have your operating system installed on the HARD drive first.

    To resolve this issue, first delete the primary partition on the mSATA SSD and then install the ExpressCache software. The ExpressCache setup will automatically create the necessary cache partition on the mSATA SSD.

    1. Open the Start Menu, type DiskMgmt.msc and press Enter to start Disk Management.

    2. At the bottom of the Disk Management window, identify the mSATA SSD device. The 16 GB mSATA SSD shows as 14.91 GB; the 32 GB mSATA SSD shows as 29.82 GB.

    3. The mSATA SSD may contain a 'Data2' partition with a D: or E: drive letter. If so, check for any saved files on the device. If files exist, you can move them to the C:\ drive.

    4. Delete the primary partition on the mSATA SSD by right-clicking on it and choosing 'Delete Volume...' in the menu that appears. Click 'Yes' to confirm the action.

    5. There should now be at least 29 GB of unallocated space on the mSATA SSD.

    6. Download ExpressCache from the following link that corresponds to your Windows 7 installation (32-bit or 64-bit):


    7. Install ExpressCache by double-clicking the installation file downloaded in step 6 and follow the on-screen instructions. Reboot when prompted.

    Best regards

    Solid Cruver

  • Upgrading the processor in a G62-229WM

    Hello

    I have been having trouble streaming high-definition video on my HP G62-229WM.  From my research so far, it seems that the cause of the problem is the processor and the solution is to upgrade it.

    My first question is whether all the processors listed in the HP Maintenance and Service Guide for my computer model are compatible with my laptop.  I see that some processors are specified for model 1.1 or 1.2 only, but other than that, can I buy and install any of the listed processors?

    My second question is how to determine which CPU to buy.  When should I start adding RAM to keep up with the processor?  I'm confused as to the interaction of these two parts.  I would not want to buy a more expensive processor if I cannot get the best out of it anyway.  I am happy to also add RAM, but I have no idea how to decide what to do.

    Currently, my system has:

    Windows 7, 64-bit

    AMD V120 Single-Core 2.24 GHz processor

    2 GB of RAM

    CPU Socket S1G4

    All parts are original.  We want to use this computer exclusively as an entertainment system, so streaming video is essentially its only task.

    I appreciate any help at all.  Thank you!

    Even the processor you have is much faster than a 3 GHz Pentium 4; a good cell phone these days is as fast as a 3 GHz P4. Clock speed isn't everything. A choice between 4 GB of RAM and a dual core with quadruple the L2 cache is a difficult one. The specifications of your laptop actually say it came with 3 GB of RAM... why does yours have only 2? Can you check the memory physically to ensure that both modules are seated? If I could upgrade only the memory or the CPU, I guess I would take the CPU. Before that, though, I would check the system monitor and see whether the system is having to use swap memory. If it is, then I would upgrade the RAM and not the processor; nothing slows a system down like having to use virtual memory (swap). I think a base load for Windows 7 requires perhaps 1.25 GB, so with 2 GB you do not have a lot of margin. I would also run msconfig and disable as many autostart programs as possible.

  • Processor upgrade

    Hello

    Can I upgrade the CPU in my HP G62-550ee laptop from the Core i3-330M to a newer version, like the Core i7-4600M?

    According to the Compaq Presario CQ62 / HP G62 Maintenance and Service Guide, only the following processors will work in your computer:

    Intel Core i7-620M processor (4 MB cache, 2.66 GHz, SC turbo up to 3.33 GHz)
    Intel Core i5-540M processor (3 MB cache, 2.53 GHz, SC turbo up to 3.06 GHz)
    Intel Core i5-520M processor (3 MB cache, 2.40 GHz, SC turbo up to 2.93 GHz)
    Intel Core i5-430M processor (3 MB cache, 2.26 GHz, SC turbo up to 2.53 GHz)
    Intel Core i3-350M processor (3 MB cache, 2.26 GHz)
    Intel Core i3-330M processor (3 MB cache, 2.13 GHz)

    If you have any other questions, feel free to ask.

    Please click the white KUDOS 'Thumbs Up' to show your appreciation.

  • HP Compaq Presario CQ60-404SA laptop - is it possible to upgrade the processor to an AMD 64 X2 TL-60?

    Is it possible to upgrade the processor in an HP Compaq Presario CQ60-404SA laptop to an AMD 64 X2 TL-60? It currently runs Vista Premium 64.

    What CPU is currently installed?

    Download and run CPU-Z as the quickest way to tell.

    The following CPUs are compatible with your notebook's system board. The AMD X2 QL-60 and QL-62 processors are your best compatible options.

    AMD Turion™ X2 Dual-Core
    (1 MB L2 cache)
    ■ RM-70 2.0 GHz processor
    ■ RM-72 2.1 GHz processor
    AMD Athlon™ X2 Dual-Core
    (1 MB L2 cache)
    ■ QL-60 1.9 GHz processor
    ■ QL-62 2.0 GHz processor
    AMD Sempron™ Single-Core
    (512 KB L2 cache)
    ■ SI-42 2.1 GHz processor
    ■ SI-40 2.0 GHz processor

  • Vostro 200 Mini-Tower desktop processor speed limit!

    1. I would like to increase its processor speed from 1.6 GHz to at least 3.0 GHz. Is this possible, and how can I do it?

    2. While I'm at it, do I have to replace or change anything else, the amount of RAM for example?

    Thank you

    17/04/2011

    Marihuas

    The Core 2 Duo E8600 (3.33 GHz, 6 MB L2 cache) processor should work fine in the Vostro 200 with the BIOS version already mentioned in my post.

    Note that the quad-core Core 2 Extreme QX9650 (3 GHz, 12 MB L2 cache) will not work with the motherboard installed in your Vostro 200; for a compatible Core 2 Quad, a G33M03 motherboard and a 350 W PSU are necessary. I believe your motherboard is the G33M02 version with a 300 W power supply.

    Bev.

  • Integrating the Hibernate L2 cache with Coherence with a composite primary key

    Hi all:
    I have a problem when I enable the Hibernate L2 cache integrated with Coherence. When the entity class uses a string or unique integer primary key, the L2 cache works well. But when I use an entity class with a composite key, I hit a problem.
    I wrote a JUnit test for the session.get method. When I ran the test case the first time, the entity missed the cache, so Hibernate triggered a DB query to load the object, boxed it as a cache item, and put it in the cache. But when I ran the test case again, I found that Hibernate triggered a query once more, which was strange... (it should not have happened; I should have got the object from the cache). I then used the Coherence cache console to list the data in the cache and found the following information:
    ###########################################
    Map (hi): list
    model.SiteUserInfo#model.SiteUserID@323819 = Item{version=null, freshTimestamp=1333943690484}
    model.SiteUserInfo#model.SiteUserID@323819 = Item{version=null, freshTimestamp=1333943600419}
    ###########################################
    There are two keys cached with the same value... (I have already implemented the hashCode and equals methods on the SiteUserID object). Has anyone had the same problem? Or can the Hibernate L2 cache with composite keys integrate with Coherence at all?

    Snippet of the SiteUserID code:
    #####################################
    @Override
    public boolean equals(Object obj) {
        if (obj == this) {
            return true;
        }
        if (!(obj instanceof SiteUserID)) {
            return false;
        }
        SiteUserID id = (SiteUserID) obj;
        return new EqualsBuilder().append(sid, id.getSid()).append(name, id.getName()).isEquals();
    }

    @Override
    public int hashCode() {
        return new HashCodeBuilder().append(sid).append(name).toHashCode();
    }
    ###################################

    Hi John,

    I think there may be another problem.

    Please see this reference:
    http://blackbeanbag.NET/WP/2010/06/06/coherence-key-HOWTO/
    A partitioned cache stores entries in the backing map in binary form.
    Key objects for the same data must therefore always serialize to the same binary form.

    Unfortunately, Hibernate uses its CacheKey class as the L2 cache key.
    And CacheKey is somewhat complex; some of its attributes use hash-map types.
    "Java gives no guarantee about ordering in a hash table, and the serialized form depends on that order."
    (reference: http://stackoverflow.com/questions/9337126/how-can-oracle-coherence-get-fail-with-retrieved-key-object)

    So... you may need to modify some of the Hibernate source code, e.g. replace the FastHashMap with a TreeMap...
    I tried it and it seems to work. I will send you the modified source files.

    Best regards
    Leon
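
    To illustrate Leon's point, here is a minimal sketch of a composite key that serializes deterministically with POF (a hypothetical class, not Hibernate's CacheKey). Because a partitioned cache compares keys in binary form, the fields are written explicitly in a fixed order; a key whose state includes a hash map with unstable iteration order cannot give this guarantee:

    import java.io.IOException;

    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
    import com.tangosol.io.pof.PortableObject;

    public class SiteUserKey implements PortableObject {
        private int sid;
        private String name;

        public SiteUserKey() {} // required by POF

        public SiteUserKey(int sid, String name) {
            this.sid = sid;
            this.name = name;
        }

        @Override
        public void readExternal(PofReader in) throws IOException {
            sid  = in.readInt(0);
            name = in.readString(1);
        }

        @Override
        public void writeExternal(PofWriter out) throws IOException {
            out.writeInt(0, sid);     // fixed field order => stable binary form
            out.writeString(1, name);
        }

        @Override
        public boolean equals(Object o) {
            if (o == this) return true;
            if (!(o instanceof SiteUserKey)) return false;
            SiteUserKey that = (SiteUserKey) o;
            return sid == that.sid
                    && (name == null ? that.name == null : name.equals(that.name));
        }

        @Override
        public int hashCode() {
            return 31 * sid + (name == null ? 0 : name.hashCode());
        }
    }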

  • Result cache invalidation

    Hello

    Oracle 11.2.0.1 on Linux

    I understand that result cache (RC) invalidation happens at the table level.

    I did a simple test:
    create table customer (custno number, custname varchar2(30));
    
    Table created.
    
    
    insert into customer (custno,custname) values (1,'Customer_1');
    
    insert INTO CUSTOMER (custno,custname) values (2,'Customer_X');
    
    select * from customer;
    
    
        CUSTNO CUSTNAME
    
    ---------- ------------------------------
    
             1 Customer_1
    
             2 Customer_X
    
    commit;
    
    Commit complete.
    Now I invoke the result cache
    select /*+ RESULT_CACHE */ * FROM customer where custno=1;
     
    
    Execution Plan
    
    ----------------------------------------------------------
    
    Plan hash value: 2844954298
    
      
    
    -------------------------------------------------------------------------------------------------
    
    | Id  | Operation          | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
    
    -------------------------------------------------------------------------------------------------
    
    |   0 | SELECT STATEMENT   |                            |     1 |    30 |     3   (0)| 00:00:01 |
    
    |   1 |  RESULT CACHE      | ggb2vz6jcvcn5ajzqh406j3n85 |       |       |            |          |
    
    |*  2 |   TABLE ACCESS FULL| CUSTOMER                   |     1 |    30 |     3   (0)| 00:00:01 |
    
    -------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    
    ---------------------------------------------------
    
      
    
       2 - filter("CUSTNO"=1)
    
      
    
    Result Cache Information (identified by operation id):
    
    ------------------------------------------------------
    
      
    
       1 - column-count=2; dependencies=(SCRATCHPAD.CUSTOMER); name="select /*+ RESULT_CACHE */ * FROM customer where custno=1"
    Invoke the RC for a second query, on the other row
     
    select /*+ RESULT_CACHE */ * FROM customer where custno=2;
     
    
    Execution Plan
    
    ----------------------------------------------------------
    
    Plan hash value: 2844954298
    
      
    
    -------------------------------------------------------------------------------------------------
    
    | Id  | Operation          | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
    
    -------------------------------------------------------------------------------------------------
    
    |   0 | SELECT STATEMENT   |                            |     1 |    30 |     3   (0)| 00:00:01 |
    
    |   1 |  RESULT CACHE      | fc8t6svvz6whh0gc8vcaxrh668 |       |       |            |          |
    
    |*  2 |   TABLE ACCESS FULL| CUSTOMER                   |     1 |    30 |     3   (0)| 00:00:01 |
    
    -------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    
    ---------------------------------------------------
    
      
    
       2 - filter("CUSTNO"=2)
    
      
    
    Result Cache Information (identified by operation id):
    
    ------------------------------------------------------
    
      
    
       1 - column-count=2; dependencies=(SCRATCHPAD.CUSTOMER); name="select /*+ RESULT_CACHE */ * FROM customer where custno=2"
    OK, they are stored as separate result cache entries

    Now update the second row of this table in another session
    update customer set custname ='Customer_2' where custno=2;
    
    1 row updated.
    
    commit;
    
    Commit complete.
    Now query custno=2 from the first session
     
    select /*+ RESULT_CACHE */ * FROM customer where custno=2;
     
    
    Execution Plan
    
    ----------------------------------------------------------
    
    Plan hash value: 2844954298
    
      
    
    -------------------------------------------------------------------------------------------------
    
    | Id  | Operation          | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
    
    -------------------------------------------------------------------------------------------------
    
    |   0 | SELECT STATEMENT   |                            |     1 |    30 |     3   (0)| 00:00:01 |
    
    |   1 |  RESULT CACHE      | fc8t6svvz6whh0gc8vcaxrh668 |       |       |            |          |
    
    |*  2 |   TABLE ACCESS FULL| CUSTOMER                   |     1 |    30 |     3   (0)| 00:00:01 |
    
    -------------------------------------------------------------------------------------------------
    
      
    
    Predicate Information (identified by operation id):
    
    ---------------------------------------------------
    
      
    
       2 - filter("CUSTNO"=2)
    
      
    
    Result Cache Information (identified by operation id):
    
    ------------------------------------------------------
    
      
    
       1 - column-count=2; dependencies=(SCRATCHPAD.CUSTOMER); name="select /*+ RESULT_CACHE */ * FROM customer where custno=2"
    The same result cache reference is still there. Does this mean that the cache is NOT invalidated despite the updated row, or am I doing something wrong here?

    Thank you

    Published by: 902986 on February 12, 2012 13:26

    The result cache id is a hash value of the query text, which later lets Oracle know whether a query corresponds to a result set that is already in the cache.

    When you updated the table, the cached result was marked invalid. Then you ran the same query for record 2; Oracle created a hash value for the query, and that hash value is identical to the first, because the query text is the same - in other words, the query itself hashes to the same value. But the contents of the result cache for this query have been rebuilt, replacing the old, invalid content.

    If you query the result cache now, you will get the new value, since the old result set for your second query is not there anymore.
