C++ cache management best practices

Hello

Establishing the initial connection to Coherence via the C++ client proxy is an expensive operation. With that in mind, could someone offer me some clarity on best practices in the following situations?

1. Currently I call CacheFactory::getCache(cache_name); and save this handle, then use the handle for the duration of the application for later gets. I use a range of different caches and I keep all the handles alive. By doing this, I found I only pay the cost of the socket connection once.

2. Given situation 1, what happens if multiple threads call 'get' on a handle? Are operations queued? Is it thread-safe?

3. Following on from situation 2, would there be a performance gain from having one cache handle per thread?


Thank you
Rich

Hi Rich,

The CacheFactory internally caches the NamedCache references it returns, so multiple calls to CacheFactory::getCache("name") will return the same reference every time. When the application is done using a cache, it can ask the factory to drop the reference and invalidate local stubs by calling CacheFactory::releaseCache, passing in the cache reference. The NamedCaches returned are also thread-safe, so you can safely share them between threads in your application. Finally, given that the factory returns the same reference every time, there is not really a way to hand out different references to different threads; if you really wanted to do that, you would need to instantiate separate DefaultConfigurableCacheFactories for each thread, rather than having them share the CacheFactory singleton. Note that these features exist in the same form in our Java and .NET clients.
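
For illustration, here is a minimal sketch of the pattern described above, based on the standard Coherence C++ client API (the cache name "dist-example" and the key/value are hypothetical):

    #include "coherence/lang.ns"
    #include "coherence/net/CacheFactory.hpp"
    #include "coherence/net/NamedCache.hpp"

    using coherence::lang::String;
    using coherence::net::CacheFactory;
    using coherence::net::NamedCache;

    int main()
    {
        // First call pays the connection cost; subsequent calls for the
        // same name return the same cached reference.
        NamedCache::Handle hCache = CacheFactory::getCache("dist-example");

        // The handle is thread-safe, so worker threads may share it.
        hCache->put(String::create("key"), String::create("value"));

        // When the application is done with the cache, release the
        // reference and shut the factory down.
        CacheFactory::releaseCache(hCache);
        CacheFactory::shutdown();
        return 0;
    }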

Thank you

Mark
Oracle Coherence Team

Tags: Fusion Middleware

Similar Questions

  • Update Manager DB best practices?

    Is it advisable to give VUM its own DB instance, separate from the VC DB instance?   Is that best practice documented somewhere?

    Thanks in advance,

    -geob

    That's probably what you are looking for: VMware Update Manager Performance and best practices

  • Request for advice: generally speaking, what is the best practice for managing a paid and a free application?

    Hi all

    I recently finished my first Cascades app, and now I want to spin off a more feature-rich version that I can sell for a reasonable price. However, my question is how to manage the code base for both applications. If anyone has any "best practices", I would like to hear your opinion.

    Do you use a revision control system? That should be a prerequisite...

    How different will the two versions of the application be?

    Generally, if you have two versions that differ only in having a handful of features disabled in the free version, you should use exactly the same code base. You could even make the packaging (build command) the only difference, for example by defining an environment variable in one of them that is checked at startup to turn on the paid features, as sketched below.
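
    A minimal sketch of that approach, assuming a hypothetical PAID_VERSION environment variable defined only by the paid build's packaging step (the variable name and messages are illustrative, not from the original posts):

      #include <cstdlib>
      #include <iostream>

      // Returns true when the packaging step defined the (hypothetical)
      // PAID_VERSION environment variable for this build.
      bool isPaidBuild()
      {
          const char* flag = std::getenv("PAID_VERSION");
          return flag != nullptr && flag[0] == '1';
      }

      int main()
      {
          if (isPaidBuild()) {
              std::cout << "Premium features enabled\n";
              // ... turn on the paid-only options here ...
          } else {
              std::cout << "Free version\n";
          }
          return 0;
      }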

  • UCS Manager best practices for the smooth running of our environment

    Hi team

    We are running a data center with Cisco UCS blades. I am looking for a UCS Manager best practices guide to check that everything is configured correctly, in accordance with Cisco's recommendations and standards, for the smooth running of the environment.
    Can someone provide suggestions? Thank you.

    Hey Mohan,

    Take a look at the following links. They should provide an overview of the information you are looking for:

    http://www.Cisco.com/c/en/us/products/collateral/servers-unified-computi...

    http://www.Cisco.com/c/en/us/support/servers-unified-computing/UCS-manag...

    HTH,

    Wes

  • Best practices for managing exceptions and success messages.

    Hey people,

    These days I've been reworking packages to clean up my application, and a question came to my mind: am I handling my exceptions the right way?


    So I want to ask you guys: what is the best practice for this? (I want to learn it before it's too late.)

    Currently I have a function that returns "OK" if all goes well.


    return('OK');  

    And I manage my exceptions like this:

      EXCEPTION
        WHEN OTHERS THEN        -- catches every error, including unexpected ones
          ROLLBACK;
          RETURN (SQLERRM);     -- hand the error text back to the caller
    
    

    In APEX, I have a process that calls the function and then checks whether the function returned "OK".

         IF cRet not LIKE 'OK%' THEN
          RAISE_APPLICATION_ERROR(-20000,cRet);
         END IF;
    
    

    And in 'Process Error Message' I put "#SQLERRM_TEXT#" so that I can see what error occurred.

    As a side question, how do you manage your success messages?

    Currently in 'Process Success Message' I put something along the lines of "Action completed successfully", and do the same for all processes.

    Would you do it differently?

    I really want to hear what you have to say since I'm tired of feeling like this is a terrible way to manage these things.

    Hi Para,

    Para wrote:

    I don't know all the situations where my function might throw exceptions like NO_DATA_FOUND, and I need to know that the process failed so I can see my #SQLERRM_TEXT#.

    I achieved this by raising the error in the application (which I think is a bad way to go) if the return value is anything other than 'OK'. So I get my application error in the APEX process, and not in my function.
    And I want to show the error inline in a notification (which my current approach does).

    You can use APEX_ERROR.ADD_ERROR in your PL/SQL process to raise errors in Oracle APEX.

    Reference: Re: Error handling in page processing

    Kind regards

    Kiran

  • Looking for best practices for upgrading Site Recovery Manager (SRM); can someone summarize the preparatory tasks?

    Hello

    Please check the content below; you may find it useful.

    Please refer to the VMware Site Recovery Manager Documentation for more detailed instructions.

    Important

    Check that there are no cleanup operations pending on recovery plans, and no configuration problems for the virtual machines that Site Recovery Manager protects.

    1. All recovery plans are in the Ready state.

    2. The protection status of all protection groups is OK.

    3. The protection status of all individual virtual machines in the protection groups is OK.

    4. The recovery status of all protection groups is Ready.

    5. If you configured advanced settings in the existing installation, note the settings you configured before the upgrade.

    6. The local and remote vCenter Server instances must be running when you upgrade Site Recovery Manager.

    7. Upgrade all Site Recovery Manager Server components on one site before you upgrade vCenter Server and Site Recovery Manager on the other site.

    8. Download the Site Recovery Manager installation file to a folder on the machines on which Site Recovery Manager is to be upgraded.

    9. Make sure no other installations are running and no Windows Update restarts are pending.

    Procedure:

    1. Connect to the machine on the protected site on which you have installed Site Recovery Manager.

    2. Back up the Site Recovery Manager database by using the tools that the database software provides.

    3. (Optional) If you upgrade from Site Recovery Manager 5.0.x, create a 64-bit DSN.

    4. Upgrade the Site Recovery Manager Server instance that connects to vCenter Server 5.5.

    If you upgrade vCenter Server and Site Recovery Manager 4.1.x, you must upgrade the vCenter Server and Site Recovery Manager Server instances in the correct sequence before you can upgrade to Site Recovery Manager 5.5.

    a. Upgrade vCenter Server 4.1.x to 5.0.x.

    b. Upgrade Site Recovery Manager 4.1.x to 5.0.x.

    c. Upgrade vCenter Server 5.0.x to 5.5.

    Please let me know if it helped you or not.

    Thank you.

  • What is the best practice for dual management interfaces?

    Hello community!

    I'm upgrading a few hosts from ESX 4.0 to ESXi 4.1 U1 in the coming weeks. My question is about how to configure the management networks. Obviously, in ESX 4.0 Classic I have a Service Console port group (on vSwitch0) and a VMkernel port group (also on vSwitch0), which provides my host with SC and vMotion capabilities, as we all know. Note: my vSwitch0 has two vmnics attached to it, one standby and one active. That's just how our dual-switch installation is set up, so it must be active/standby.

    I got to thinking (reading the great HA and DRS deepdive book by Duncan Epping and Frank Denneman) that I should consider my management network carefully when I upgrade these hosts to ESXi 4.1, which of course does away with the Service Console and uses a VMkernel port instead.

    The question is, what is best practice given my setup: should I have two VMkernel ports? If so, how should I configure each for management and vMotion traffic?

    I think it will be a good discussion to have.

    Thank you all,

    Matt

    Set the vSwitch0 NICs to active/active, team the vswif (and future management VMkernel) NICs with vmnic1 active and vmnic6 standby, and leave the vMotion VMkernel NIC team as it is.

    This will allow you to use both physical network interface cards at the same time while having a failover plan, and keeps your management and vMotion traffic physically separate.

    In the end:

    vSwitch0: vmnic1 active, vmnic6 active.

    VMK (Mgmt): vmnic1 active, vmnic6 standby.

    VMK (vMotion): vmnic6 active, vmnic1 standby.

  • Separate management / VMotion Best Practice?

    We're moving from ESX 4.0 to ESXi 4.1.  Our servers have 4 physical Gigabit NICs.

    On ESX 4.0, we run 2 vSwitches:

    vSwitch0

    Service Console - vmnic0 active, vmnic3 standby

    VMkernel - vmnic3 active, vmnic0 standby

    (Separate network interface cards / IPs per function)

    vSwitch1

    VM port groups - vmnic1 and vmnic2 active

    (Several VLANs on shared resources)

    With the changes in ESXi, is it recommended to separate management from vMotion as we did with ESX?  Note that we use the same subnet for both functions.

    Personally, I would prefer combining management and vMotion.  Wouldn't vMotion benefit from the use of an additional NIC, especially with multiple simultaneous vMotions?  At the same time, it doesn't seem that management traffic would be impeded to the point of needing separation.  In addition, security should not be a problem, since, as noted, we use the same subnet for management and vMotion anyway.

    Your configuration is consistent with "best practices". I prefer to keep management and VMkernel vMotion traffic separate myself, even if it costs me some vMotion performance.

    ---

    MCITP: SA + WILL, VMware vExpert, VCP 3/4

    http://blog.vadmin.ru

  • Best practices for path management policies

    Hello

    I am getting conflicting advice on best practices for path management policies.

    We are on ESXi version 4.0, connecting to an HP EVA8000. HP's best practices guide recommends setting the path management policy to Round Robin.

    This seems to give two active paths to the optimized controller. See: http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA1-2185ENW.pdf

    We have used certain consultants, and they say that VMware's best practice for this solution is to use the MRU policy, which results in a single path to the optimized controller.

    So, any idea which recommendation is the best practice? Does it make a difference?

    TIA

    Rob.

    Always go with the storage vendor's recommendation.  VMware's recommendation is based on generic array characteristics (controller type, ALUA capability, failover methods, etc.).  The storage vendor's recommendation is based on their performance and compatibility testing.  You may want to review their recommendations carefully, however, to ensure that each point is what you want.

    With the 8000, I ran with Round Robin.  This is the most robust of the path options available to you from a failover and performance point of view, and it can provide more even performance across the ports on the storage controller.

    While I did no specific tests/validation, the last time I looked at the docs, HP's recommended configuration was to switch ports on each IO.  This adds load to the ESX host for switching to the other ports, but HP claims that their tests showed it is the optimal configuration.  It was the only parameter in their recommendation I wondered about.

    If you haven't done so already, be sure to download the HP doc on configuring ESX with EVA arrays.  There are several parameters to configure besides the path policy, as well as a few scripts to help make the changes.

    Happy virtualizing!

    JP

    Please consider awarding points to useful or appropriate responses.

  • Best practices for Master Data Management (MDM) integration

    I am working on integrating MDM with Eloqua and am looking for the best approach to sync lead/contact data changes from Eloqua into our internal MDM hub (outbound only). Ideally, we would like the integration to be practically real-time, but my findings to date suggest that there is no such option; any integration will involve some kind of scheduling.

    Here are the options we have identified:

    1. "Exotic" CRM integration: using internal events to capture and queue in the queue changes internal (QIP) and allows access to the queue from outside Eloqua SOAP/REST API
    2. Data export: set up a Data Export that is "expected" to run on request and exteernally annex survey via the API SOAP/REST/in bulk
    3. API in bulk: changes in voting that has happened since the previous survey through the API in bulk from Eloqua outside (not sure how this is different from the previous option)

    Two other options which may not work at all and which are potentially anti-patterns:

    • Cloud connector: create a scheduled campaign that queries for changes, and configure a cloud connector (if possible at all) to notify an MDM endpoint to query the contact/lead record from Eloqua.
    • "Native" CRM integration (crazy): fake a native CRM endpoint (for example, Salesforce) and use internal events and external calls from Eloqua to push data into our MDM

    Questions:

    1. What is the best practice for this integration?
    2. Is there an option that would give us near-real-time integration (technically asynchronous, but event-based/callback-driven)? (something like outbound messaging in Salesforce)
    3. What limits should we consider with these options? (for example daily API call limits, SOAP/REST response size)

    If you can, I would try to talk to Informatica...

    To imitate the native-type integrations, you would use the QIP and control which activities get queued to it by internal events, as you would with a native integration.

    You would also use the cloud connector API to allow you to set up a CRM (or MDM) integration program.

    You would have identification fields added to the contact and account objects in Eloqua for their respective IDs in the MDM system, and keep track of the last MDM update with a date field.

    A task scheduled outside of Eloqua would run at a set interval, extract the QIP changes and send them to MDM, and pull the contacts waiting to be sent from the cloud connector step; a rough sketch of that loop follows below.

    Unfortunately, there isn't really anything like outbound messaging.  You can have form submissions send data to a server immediately (it would be a bit like running integration rule collections from form processing steps).
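
    Schematically, the scheduled task described above is a poll-with-watermark loop. Here is a rough, self-contained C++ sketch for illustration only; every name in it is a hypothetical stand-in, not an Eloqua or MDM API:

      #include <chrono>
      #include <iostream>
      #include <string>
      #include <vector>

      using Clock = std::chrono::system_clock;

      // Hypothetical stand-in for a queued contact/lead change.
      struct Change { std::string contactId; std::string payload; };

      // In reality the watermark would be persisted between runs.
      static Clock::time_point g_watermark{};

      std::vector<Change> fetchQueuedChanges(Clock::time_point /*since*/)
      {
          // Real code would poll the queue via the SOAP/REST/Bulk API here.
          return { {"C-001", "email address changed"} };
      }

      void pushToMdm(const Change& c)
      {
          // Real code would call the MDM hub's inbound interface here.
          std::cout << "MDM <- " << c.contactId << ": " << c.payload << "\n";
      }

      // One scheduled run: pull everything queued since the last run,
      // forward it to the MDM hub, then advance the watermark.
      void runScheduledSync()
      {
          Clock::time_point now = Clock::now();
          for (const Change& c : fetchQueuedChanges(g_watermark)) {
              pushToMdm(c);
          }
          g_watermark = now;  // next run picks up only newer changes
      }

      int main() { runScheduledSync(); return 0; }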

    Cheers,

    Ben

  • Apple ID best practices

    I help family members and others with their Apple products. Probably the number one problem revolves around Apple IDs. I have seen users do the following:

    (1) share IDs among the members of the family, but then wonder why messages/contacts/calendar entries etc are all shared.

    (2) have several Apple IDs associated willy-nilly with seemingly random devices; the extra Apple IDs are not used for anything.

    (3) forget passwords. They always forget passwords.

    (4) This one I don't really understand: they use an e-mail address from another system (gmail.com, hotmail.com, etc.) as their Apple ID. Invariably, they use a different password for the Apple ID than for the e-mail account itself, so they are constantly confused about which account they are signing in to.

    I have looked around for an article on best practices for creating and using Apple IDs, but could not find such a post. So I thought I would throw out a few suggestions. If anyone knows of a list, or wants to suggest changes/additions, please feel free. These are best practices for normal circumstances, i.e. not corporate accounts etc.

    1. Every person has exactly one Apple ID.

    2. Do not share Apple IDs - share content.

    3. Do not use an e-mail address from another account as your Apple ID.

    4. When you create a new Apple ID, don't forget to complete the secondary information at https://appleid.apple.com/account/manage. The rescue e-mail and security questions are EXTREMELY important.

    5. The last step is to collect the information you entered into a document, save it to your computer, AND print it and store it somewhere safe.

    Suggestions?

    I disagree with no. 3: there is no problem with using a non-iCloud address as the primary ID; indeed, depending on where you set up your ID, you may have no choice but to.

  • Best practices Upgrade Path - Server 3 to 5?

    Hello

    I am attempting a migration and upgrade of a Profile Manager server. I currently run an older Mac mini server on 10.9.5 and Server 3, with an extensive Profile Manager installation. I recently successfully migrated the server itself from the old Mac mini to a late-2009 Xserve by cloning the drive. I am still double-checking everything, but it seems the transition between the Mini and the Xserve was successful and everything works as it should (just with improved performance).

    My main question now is that I want to get this up to date software-wise and move to Server 5 and 10.11. I see a lot of documentation (some officially from Apple) on best practices for upgrading from Server 3 to 4 and Yosemite, but I can't find much on Server 5 and El Capitan, let alone going from 3 to 5. I understand that I'll probably have to buy the app again and that's fine... but should I stage this, going to 10.10 and Server 4 first... make sure all is well... and then jump to 10.11 and Server 5? Or is it 'safe' (or OK) to jump from Server 3 to 5 (and 10.9.5 to 10.11.x)? Obviously the App Store is happy to make the jump from 10.9 to 10.11, but again, I'm looking for best practices here.

    I will of course ensure that all backups are up to date and make another clone just before whichever path I take... but I was wondering if someone has made the leap from 3 to 5... and had things (like Profile Manager) still work correctly on the other side?

    Thanks for any info and/or guidance.

    In your position I would keep the Mini running Server 3, install El Capitan and Server 5 on the Xserve, and walk through setting up Server 5 by hand. Things that need to be 'migrated', such as Open Directory, should be handled by exporting from the Mini and reimporting on the Xserve.

    In my experience, OS X Server installations that were "migrated" always seem to end up with esoteric problems that are difficult to correct, and it's easier to adopt the procedure above than to lose a day trying.

    YMMV

    C.

  • TDMS & Diadem best practices: what happens if my mark has breaks/cuts?

    I created a LV2011 datalogging application that stores a lot of data to TDMS files.  The basic architecture is like this:

    Each channel has these properties:

    t0 = start time

    dt = sampling interval

    Channel values:

    1D array of DBL values

    After datalogging starts, I just keep appending the channel values.  And if the TDMS file size goes beyond 1 GB, I create a new file and start again.  The application runs continuously for days/weeks, so I get a lot of TDMS files.

    It works very well.  But now I need to change my system to allow pausing/resuming the data acquisition.  In other words, there will be breaks in the signal (probably from 30 seconds to 10 minutes).  I had originally considered logging two values for each data point, as XY chart data (value & timestamp).  But I am opposed to this in principle because, in my view, it fills the hard drive unnecessarily (twice as much disk footprint for the same data?).

    Also, I've never used DIAdem, but I want to ensure that my data can be easily opened and analyzed in it.

    My question: are there best practices for storing signals that pause/break like that?  I would just start a new record with a new start time (t0), and have DIAdem somehow "link" these signals... so that, for example, it knows one is a continuation of the same signal.

    Of course, I should install DIAdem and play with it.  But I thought I would ask the experts on best practices first, as I have no knowledge of DIAdem.

    Hi josborne;

    Do you plan to create a new TDMS file whenever the acquisition stops and starts, or would you rather store multiple acquisition sections in the same TDMS file?  The best way to manage the date/time shift is to store one waveform per channel per acquisition section and use the "wf_start_time" channel property, which TDMS data gets automatically if you wire an orange floating-point array or a brown waveform to the TDMS Write.vi.  DIAdem 2011 can easily access the time offset when it is stored in this property of the channel (assuming it is stored as a date/time and not as a DBL or a string).  If you have only one acquisition section per TDMS file, I would certainly also add a 'DateTime' property at the file level.  If you want to store several acquisition sections in a single TDMS file, I would recommend using a separate group for each section.  Make sure that you store the following channel properties in the TDMS file if you want the information to flow naturally into DIAdem:

    wf_xname
    wf_xunit_string
    wf_start_time
    wf_start_offset
    wf_increment

    Brad Turpin

    DIAdem Product Support Engineer

    National Instruments

  • Best practices for reading .ini files

    Hello LabVIEWers

    I have a pretty big application that does a lot of hardware communication with various devices. I created an executable, because the software runs at multiple sites. Some settings are currently hardcoded; others, such as the camera focus, I put in an .ini file. The thinking was that these kinds of parameters may vary from one site to another and can be set by a user in the .ini file.

    I would now like to extend the application with the possibility of using two different hardware versions of the key device (an atomic force microscope). I think it makes sense to do this using two versions of the .ini file. I intend to create two different .ini files, and a trained user could still adjust settings there, such as the camera focus, if necessary; the other settings they cannot touch. I also mean to force the user to select an .ini file when starting the executable, via a file dialog, unlike now where the (only) .ini file is read in automatically. If no .ini file is specified, the application would stop. Does this use of .ini files make sense?

    My real question now revolves around how to manage reading the .ini file. My estimate is that 20-30 settings will be stored in it. I see two possibilities, but I don't know which is the better choice, or whether I'm missing a third:

    (1) (Current solution) I created a reader VI in which I write all the .ini values to the project's global variables. All other VIs only read the global variables (there is no other writer), to avoid race conditions.

    (2) I pass the path of the .ini file to the subVIs and read the values from the .ini file when necessary. I can open it read-only.

    Which is the best practice? Which is more scalable? Advantages/disadvantages?

    Thank you very much

    1. I recommend just using a single configuration file.  Simply have a key that says which type of device is actually in use.  This will make things easier on the user, because they will not have to keep selecting the right file.

    2. I would use the globals.  There is no need to constantly open, read, and close a file when the values are the same everywhere.  And since it's just a single read at startup, globals are perfect for this.  (For illustration, a read-once sketch follows below.)
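
    The read-once pattern is language-independent; here is a minimal C++ sketch for illustration (the settings.ini name, the bare key=value format, and the camera_focus key are hypothetical, not from the original posts):

      #include <fstream>
      #include <map>
      #include <string>

      // Loaded once at startup; everything else only reads it, so there
      // are no race conditions from concurrent file access.
      static std::map<std::string, std::string> g_config;

      // Minimal "key=value" parser; a real .ini reader would also handle
      // [sections] and comments.
      bool loadConfig(const std::string& path)
      {
          std::ifstream in(path);
          if (!in) return false;  // caller can abort if no config is found
          std::string line;
          while (std::getline(in, line)) {
              std::string::size_type eq = line.find('=');
              if (eq == std::string::npos) continue;
              g_config[line.substr(0, eq)] = line.substr(eq + 1);
          }
          return true;
      }

      int main()
      {
          if (!loadConfig("settings.ini")) return 1;  // stop without a config
          std::string focus = g_config["camera_focus"];  // read-only access
          (void)focus;
          return 0;
      }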

  • Just upgraded; need tips on best practices for file sharing on a Server 2008 Std.

    The domain contains about 15 machines with two domain controllers; one holds the data for app files / print etc...  I just upgraded from 2003 to 2008 and want advice on best practices for setting up file sharing for a group. Basically I want each user to have their own folder, but also a shared staff folder. Since I am usually accustomed to using Windows Explorer, I would like to know the best way to set up these shares. Also, I noticed 2008 has a Contacts feature; how can it be used? I would like to message or e-mail users their file locations. I also want to set up a lower-level admin to manage the shares without letting them go too deep into the server; I am not sure how.

    I have read a bit, but I don't like testing live anymore because it can cause problems. So basically: a short and neat way to manage shares using the MMC, as well as a way to e-mail users their shares from the server. Also, what kind of access control or permissions are suitable for documents, and how can I have users use Office templates without changing the template's format?

    THX

    g

    Hello 996vtwin,

    Thank you for visiting the Microsoft Answers site. The question you have posted is related to Windows Server and would be better suited to the Windows Server TechNet community. Please visit the link below to find a community that can help with what you are asking:

    http://social.technet.Microsoft.com/forums/en-us/category/WindowsServer

    Hope this helps!

    Adam
    Microsoft Answers Support Engineer
    Visit our Microsoft answers feedback Forum and let us know what you think
