Critical section in J2ME

I am trying to find a way to synchronize access to a Vector class variable that is shared between threads. Because I come from a C/C++ background, I was thinking of a singleton with a critical-section class. Is there a similar mechanism in J2ME? The reason is that I want to put objects into a vector until the GPS listener receives a response, and then pass each object (taken out of the vector) to another thread for processing. So the vector will act as an intermediate storage point: one side drops objects in, and the GPS listener takes them out and hands them to the processing thread.

I'm not sure I understand your question. The question I tried to answer in your original message was "is there a simple monitor in Java?", which the article I linked addresses. What are you trying to do?

import java.util.Vector;

// Demonstrates a thread-safe stack (LIFO) built on a Vector
public class Temp
{
    // A vector used to hold objects queued for processing
    private static Vector Pergatory = new Vector();

    // "Push" an object onto the top of the stack
    public static void push(Object obj)
    {
        synchronized (Pergatory)
        {
            Pergatory.insertElementAt(obj, 0);
        }
    }

    // "Pop" the object off the top of the stack
    // (callers should make sure the stack is not empty first)
    public static Object pop()
    {
        Object o;

        synchronized (Pergatory)
        {
            o = Pergatory.firstElement();
            Pergatory.removeElementAt(0);
        }

        return o;
    }

    // Get the object on top of the stack without removing it
    public static Object peek()
    {
        Object o;

        synchronized (Pergatory)
        {
            o = Pergatory.firstElement();
        }

        return o;
    }
}

By using only these methods to change the contents of the vector, one thread can add objects while other threads remove them. The J2SE Javadocs say that the Vector class is already synchronized internally; I can't find that statement in RIM's API Javadocs, however.

Edit for clarity.
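As an illustration only (this sketch is not part of the original answer, and the class and method names are invented), here is a minimal blocking hand-off between the GPS listener thread and a processing thread, using nothing beyond Vector and wait/notify, which are available at the CLDC level:

import java.util.Vector;

// Illustrative sketch: a blocking FIFO hand-off between the GPS listener
// thread (producer) and the processing thread (consumer), built only on
// Vector + wait/notify so it stays within CLDC/J2ME. Names are made up.
public class HandoffQueue
{
    private final Vector items = new Vector();

    // Called from the GPS listener thread
    public void put(Object obj)
    {
        synchronized (items)
        {
            items.addElement(obj);   // FIFO: append at the end
            items.notify();          // wake a waiting consumer
        }
    }

    // Called from the processing thread; blocks until an object is available
    public Object take() throws InterruptedException
    {
        synchronized (items)
        {
            while (items.isEmpty())
            {
                items.wait();
            }
            Object o = items.firstElement();
            items.removeElementAt(0);
            return o;
        }
    }
}

The GPS listener would call put() as responses arrive, and the processing thread would loop on take().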

Tags: BlackBerry Developers

Similar Questions

  • [ETA: posted in the wrong forum, sorry! I will move it to the relevant forum] Reading samples: atomicity of a critical section?

    Hi all

    I have the following situation:
    -a single thread (T1) continuously reads samples (at 44100 Hz) from a USB NI 9229 module into a local buffer and copies them into an array A
    -another thread (T2) waits for a command, then copies a batch of 44100 samples from A into a local array variable and performs a calculation on them
    -array A is protected by a critical section (T1 and T2 enter the critical section whenever they work with A directly, then leave it)

    How "atomic" is the reading of device samples into my array A? That is, does the critical section guarantee that by the time T2 accesses A, T1 has finished writing all 44100 samples into it?
    Or should I register a callback with DAQmxRegisterEveryNSamplesEvent() that 1) reads 44100 samples and 2) notifies T2 that the batch is complete and available?

    Thank you in advance!

    As I understand the situation, the answer should be "yes": the array is protected until you leave the critical section.

    But may I suggest an alternative, perhaps simpler approach?

    If you set up a thread-safe queue of 44100 elements that discards old items, with T1 filling it at one end, T2 can freely read the queue without disturbing T1 and always sees the most recent batch of data.
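    Purely as an illustration of the drop-oldest idea (the original suggestion refers to CVI's thread-safe queue functions, not to this class, and all names below are invented), here is a minimal sketch in Java:

    // Illustrative sketch of a bounded, drop-oldest buffer. The producer (T1)
    // keeps adding samples and silently overwrites the oldest one when the
    // buffer is full; the consumer (T2) takes a snapshot of the newest data.
    public class DropOldestBuffer
    {
        private final double[] ring;
        private int count = 0;   // number of valid samples currently stored
        private int head = 0;    // index of the oldest stored sample

        public DropOldestBuffer(int capacity)
        {
            ring = new double[capacity];   // e.g. 44100 for one second of data
        }

        // Producer (T1): append one sample, dropping the oldest when full
        public synchronized void add(double sample)
        {
            int tail = (head + count) % ring.length;
            ring[tail] = sample;
            if (count < ring.length)
                count++;
            else
                head = (head + 1) % ring.length;   // overwrite => drop oldest
        }

        // Consumer (T2): copy out the currently stored samples, oldest first
        public synchronized double[] snapshot()
        {
            double[] out = new double[count];
            for (int i = 0; i < count; i++)
                out[i] = ring[(head + i) % ring.length];
            return out;
        }
    }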

  • Two parallel executions, calling a DLL function

    Hello

    Since it takes about 6 hours to test my UUT, I plan to use the parallel model to test 2 UUTs at the same time.

    I implemented the test code as a CVI DLL.

    However, to my surprise, it seems that the steps that call a DLL function actually run in series, not in parallel:

    Of the 2 test sockets, if one enters and executes a DLL function, the other waits for the first to complete its operation and return. Both sockets run against the same copy of the DLL, so the DLL's global variables are actually shared between the executions.

    So if a DLL call takes 5 minutes to complete, two executions running at the same time take 10 minutes. That isn't parallel execution in any meaningful sense.

    What I want, and what I also expected from TestStand, was to completely isolate the DLL copies of these two executions, so that the two test sockets could run the same DLL function at the same time, each executing its own copy of the function, completely isolated from the other.

    They would then have separate globals, threads, etc., and two parallel sockets would take 5 minutes to run the step instead of 10.

    Is such a scenario possible?

    If not, how can I run my tests in parallel (truly in parallel) when using a 2-socket test?

    (1) Yes, multiple executions in TestStand calling into the same DLL call into the same in-memory copy of that DLL. DLLs called this way must therefore be thread-safe (that is, written in a way that is safe for multiple threads running the code at the same time). This usually means avoiding global variables, among other things. Instead, you can store per-thread data in local variables in your sequence and pass it into the DLL as a parameter as needed. Keep in mind that any DLLs your DLL calls must also be thread-safe, or you need to synchronize the calls into those other DLLs with locks or other synchronization primitives.

    (1b) Even if your DLLs are not thread-safe, you might still get some benefit from parallel execution by using the auto-scheduling step type and splitting your sequence into independent sections that can be performed in any order. This lets Test A run on one socket and Test B on the other socket in parallel; once they are done, Test B can run on the first socket while Test A runs on the other. In this way, as long as each test is independent of the others, you can safely run them in parallel even if you cannot run the same test on both sockets at the same time (that is, even if Test A cannot run on two sockets at once, you can still gain some parallelism by running Test B on one socket while Test A runs on the other). See the online help for the auto-scheduling step type for more details.

    (2) Socket executions (and really all TestStand executions) are separate threads within the same process. Since they are in the same process, global variables in the DLL are essentially shared between them. TestStand station globals are also shared between them. TestStand file globals, however, are not shared between executions (each execution gets its own copy) unless you enable that setting in the sequence file properties dialog.

    (3) Of course, using the socket index as a way to partition data access is perfectly valid. Just be careful that what each thread does cannot affect data that other threads access. For example, if you have a global array with 2 elements, one for each test socket, you can safely index into the array by socket; that way the threads are not sharing data even though you use a global variable. But the array must be created up front, before the threads start running, or its creation must be synchronized in some way; otherwise one thread might try to access the data while the other thread is still creating it. Basically, you need to make sure that creating/deleting, modifying, or accessing global data in one thread cannot affect the global data the other thread uses, or else you must protect that creation/deletion, modification, and access with locks, mutexes, or critical sections. (A short illustrative sketch of the per-socket indexing idea appears after this reply.)

    Hope this helps,

    -Doug
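    (As a language-neutral aside, not part of the original reply and not TestStand code: point (3) above - index into pre-created shared data rather than create or resize it while the threads run - can be sketched in a few lines of Java. All names are invented.)

    // Illustrative sketch: one slot per test socket, created before any worker
    // thread starts. Each thread reads and writes only its own slot, so no
    // further locking is needed; lazily creating or resizing the array while
    // the threads run would require a lock instead.
    public class PerSocketData
    {
        private static final double[] results = new double[2];   // one slot per socket

        public static void recordResult(int socketIndex, double value)
        {
            results[socketIndex] = value;   // safe: slots are disjoint per thread
        }

        public static double readResult(int socketIndex)
        {
            return results[socketIndex];    // each thread reads only its own slot
        }
    }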

  • Create a data streaming from C++ stream and read it in LabView

    Hi all.

    I'm working on a project that is to connect to a tracker of movement and reading data of position and orientation of this in real time. The code to get the data is in c ++, so I decided that the best way to do it would be to create a c++ DLL that contains all the functions necessary to first connect to the device and it reads the data and use the node to call a library function to power the Labview data.

    I have a problem, because, ideally, I would like a continuous flow of data from the code c ++ in Labview, and I don't know how to do this. Put the node function of library call for a while loop seems like an obvious solution, but if I do it this way I'd have to reconnect to the device whenever I get the data, which is quite a bit too slow.

    So my question is, if I created c ++ function that creates a data stream, could I read that in Labview without continually having to call a function? I would rather have only to call a function once, and then read the data stream until a stop command is given.

    I'm using Labview 2010, version 10.0.

    Apologies if the question is badly worded, thank you very much for your help.

    Dave

    dr8086 wrote:

    This sounds like an excellent suggestion, but I have a few questions where I don't think I have fully understood.

    I understand the basic principle is to use a Call Library Function Node to access a DLL that creates an instance of the device object and passes a pointer to it back into LabVIEW. Then a separate Call Library Function Node would pass this pointer to another DLL function that accesses the device object, updates it, and reads the data. This part could sit in a while loop and keep reading the data until a stop command is given.

    That's it. I'm including some skeleton example code. I'm also including the threading code because I don't know how much experience you have with multithreading, so it shows how you can use critical sections to keep interactions between threads from causing problems.

    // Includes needed by the skeleton below
    #include <windows.h>   // CRITICAL_SECTION, CreateThread, Sleep, min
    #include <stdint.h>    // uintptr_t
    #include <string.h>    // memcpy

    // exported functions to access the device
    extern "C"  __declspec(dllexport) int __stdcall init(uintptr_t *ptrOut)
    {
        *ptrOut= (uintptr_t)new CDevice();
        return 0;
    }
    
    extern "C"  __declspec(dllexport) int __stdcall get_data(uintptr_t ptr, double vals[], int size)
    {
        return ((CDevice*)ptr)->get_data(vals, size);
    }
    
    extern "C"  __declspec(dllexport) int __stdcall close(uintptr_t ptr, double last_vals[], int size)
    {
        int r= ((CDevice*)ptr)->close();
        ((CDevice*)ptr)->get_data(last_vals, size);
        delete (CDevice*)ptr;
        return r;
    }
    
    // h file
    // Represents a device
    class CDevice
    {
    public:
        virtual ~CDevice();
        int init();
        int get_data(double vals[], int size);
        int close();
    
        // only called by new thread
        int ThreadProc();
    
    private:
        CRITICAL_SECTION    rBufferSafe;    // Needed for thread safety
        vhtTrackerEmulator *tracker;
        HANDLE              hThread;
        double              buffer[500];
        int                 buffer_used;
        bool                done;       // this HAS to be protected by critical section since 2 threads access it. Use a get/set method with critical sections inside
    };
    
    //cpp file
    
    DWORD WINAPI DeviceProc(LPVOID lpParam)
    {
        ((CDevice*)lpParam)->ThreadProc();      // Call the function to do the work
        return 0;
    }
    
    CDevice::~CDevice()
    {
        DeleteCriticalSection(&rBufferSafe);
    }
    
    int CDevice::init()
    {
        tracker = new vhtTrackerEmulator();
        InitializeCriticalSection(&rBufferSafe);
        buffer_used= 0;
        done= false;
        hThread = CreateThread(NULL, 0, DeviceProc, this, 0, NULL); // this thread will now be saving data to an internal buffer
        return 0;
    }
    
    int CDevice::get_data(double vals[], int size)
    {
        int len;

        EnterCriticalSection(&rBufferSafe);
        if (vals)
        {
            len = min(size, buffer_used);
            memcpy(vals, buffer, len * sizeof(double));
            buffer_used = 0;                // whatever wasn't read is discarded
        }
        else                                // vals == NULL: just report the current buffer size
        {
            len = buffer_used;
        }
        LeaveCriticalSection(&rBufferSafe);

        return len;
    }
    
    int CDevice::close()
    {
        done= true;
        WaitForSingleObject(hThread, INFINITE); // handle timeouts etc.
        delete tracker;
        tracker= NULL;
        return 0;
    }
    
    int CDevice::ThreadProc()
    {
        while (!done)
        {
            tracker->update();
            EnterCriticalSection(&rBufferSafe);
            if (buffer_used<500)
                buffer[buffer_used++]= tracker->getRawData(0);
            LeaveCriticalSection(&rBufferSafe);
            Sleep(100);
        }
        return 0;
    }
    

    dr8086 wrote:

    My main concern is that the object could go out of scope or be deallocated, since it doesn't belong to any namespace or anything like that.

    Since you create the object with new, it does not go away until the DLL is unloaded or the process (LabVIEW) exits. So the object remains valid between LabVIEW calls into the DLL, on the condition that LabVIEW does not unload the DLL (which it does when the VIs are closed). When that happens, I don't know exactly what happens to objects that are still active (that is, if you forgot to call close); I assume the system reclaims the memory, but the device could still be left open.

    What I do to make sure everything is closed if the DLL unloads before close could be called is this: whenever I create a new object in the DLL, I add it to a list; when the DLL is unloaded, if the object is still on that list, I delete it then.

    dr8086 wrote:

    I also have a more general programming question about the purpose of the buffer. The buffer would essentially be a large array of position values, which are stored until they can be read by the rest of the VI?

    Yes, see the code example.

    However, depending on how frequently you need to collect data from the device, you may not need this buffer at all. That is, if you only take a sample every 100 ms, then you can remove all the thread- and buffer-related functions and instead read the data from the read function itself, like this:

    double CDevice::get_data()
    {
        tracker->update();
        return tracker->getRawData(0);
    }
    

    You only need a buffer and a separate thread if you collect data at a high frequency and cannot afford to lose any samples.

    Matt

  • Intermittent DAQ VI reads wrong values in Batch mode

    Hi all

    I have a problem in a test bench sequence where I use a LabVIEW VI to read about 20 voltages from the analog inputs of a PXI chassis.

    I use the VI in several multiple-numeric-limit test steps, and sometimes the VI returns all zeros (i.e. 0.00000), causing the step to fail.

    If I run the VI stand-alone, I always get a real reading (even if it's only 0.001154).

    If I run the sequence in Single Pass mode, I don't get a failure.

    If I run the sequence in Batch mode, together with a different sequence file that also calls the VI in several steps, I get these occasional failures.

    Thank you

    Martin

    Alternatively, you might consider using the Lock step type to protect the critical sections of your sequence.

    -Doug

  • thread->PostUIMessage throws com_error exceptions

    We have developed a TestStand OI (4.2) that includes a trace-logger function. Code modules (DLLs coded in C++) used in the sequences can send trace messages to the OI by using the PostUIMessage function (see the following code):

    try
    {
        // Get the thread running in the context of the sequence
        thread = seqContext->GetThread();

        thread->PostUIMessage(static_cast<UIMessageCodes>(UIMsg_UserMessageBase + 4),
                              Level,
                              _bstr_t(Msg),
                              TRUE);
    }
    catch (_com_error &com_error)   /* the TestStand API throws only this kind of exception */
    {
        // .....
    }

    A message can be sent roughly every 10 ms. Generally this works well, both when running the sequence in the sequence editor and in our OI. But after a random run time of anywhere from 10 minutes to a few hours (depending on the message frequency), PostUIMessage throws a com_error exception.

    Note: the code is protected as a critical section to ensure that it also works in multithreaded environments.

    Does anyone have any idea what could be the reason for these exceptions and how to avoid them?

    Thanks in advance

    Peter

    Hi Peter,

    You wrote that this cycle is about 10 ms.

    That's fast! Normally I would only use such rates in worker threads or in threads separate from TestStand.

    Do you have this kind of rate in your code module or in the sequence file?

    Maybe your variable "seqContext" or "thread" is not valid.

    Before calling thread->PostUIMessage, check that everything is valid.

    Hope this helps

    Jürgen

  • Timed loop versus While loop

    Most of the machine-control software I design has the following structure:

    1. There is a MAIN that runs inside a TIMED LOOP with 50 ms timing and a priority of 100. Its only job is to read/write data from/to the DAQmx IO cards.

    2. The MAIN can call several SUBs based on the user's choice; once a SUB is called, the MAIN FP is closed and the SUB FP opens. All SUBs have a Queued State Machine running inside a TIMED LOOP with 50 ms timing but with a priority of 50.

    3. Data transfer between MAIN/SUB is through functional globals - there are many of them, depending on the data being passed.

    4. Everything works fine so far. No need for any RTOS, and the WIN7 platform is almost standard. I have even run with 20 ms timing without anything crashing...

    Problem: when there are a lot of file I/O operations in a particular SUB, I start to see several missed iterations. Perhaps the TIMED LOOP is hogging resources.

    What I want to do: convert both the MAIN and SUB TIMED LOOPs into simple while loops. But I am concerned about priority - since the MAIN interacts with the hardware, it needs priority. With a WHILE LOOP, how can I ensure that?

    Or is there an alternative, more effective way of doing what I do now?

    Rama wrote:

    .... FGV should be thrown out the window...

    I have used them for a while, based on many articles in the KB. One of them is linked... and it does not portray the Action Engine or the FGV as a villain to be avoided.

    The Action Engine is one of the greatest constructs in LabVIEW.  An FGV that does nothing but Get and Set (or Write and Read) is useless and a waste of resources.  Why?  It does nothing to fix possible race conditions (it does not protect critical sections), and it is much slower than just using a global variable.  See this example I put together to see what I mean: an overview of race conditions.
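    (As a language-neutral aside, not part of the original LabVIEW discussion: the point that a bare Get/Set does not protect a critical section can be sketched in a few lines of Java. The class and method names below are invented.)

    // Illustrative sketch: get() and set() are each synchronized, yet
    // increment() is still a race because the read-modify-write spans two
    // separately locked calls. incrementAtomically() locks the whole
    // critical section instead, which is what an Action Engine does.
    public class GetSetRace
    {
        private int value = 0;

        public synchronized int get()       { return value; }
        public synchronized void set(int v) { value = v; }

        // Racy: another thread can run between get() and set(), so two
        // concurrent callers can both write the same result and lose an update.
        public void increment()
        {
            set(get() + 1);
        }

        // Correct: the read-modify-write happens inside one critical section.
        public synchronized void incrementAtomically()
        {
            value = value + 1;
        }
    }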

    Rama wrote:

    So in the sample I attached, what do you think would happen if I just replaced the two timed loops in MAIN and SUB with while loops using the SAME wait (ms) value? Is there a defined execution order then?

    When things run in parallel, there is no such thing as an order of execution.  But as I said, your loops seem to be quite slow, so that is not something I would worry about.  Just make sure you do not have a loop that uses all of the CPU.

  • Async timer and Mutex?

    Hi all

    I rarely used async timers, but now I need.

    They run in a separate thread, right?

    Does that mean that in some cases I need to use mutexes or critical sections?

    I vaguely remember there being facilities for those in CVI. I usually use the pthreads library on Linux, so I don't know what you guys recommend in CVI.

    Thank you.

    In addition to ebalci's notes, take a look at this multithreading documentation:

    CVI ships with a complete set of tools for sharing and protecting data between threads: beyond the TSVs already mentioned by ebalci, you can also count on locks and thread-safe queues, for example.

  • How to check if can device is busy to write or read

    Hi, I'm using an NI USB-8472 low-speed CAN interface and I want to check whether the device is busy.

    When I send a frame to the CAN device I also have a continuous loop sending another frame, and I want to make sure I am not writing or reading from my continuous loop at the same time.

    This loop is implemented with an async timer.

    How can I check whether the device is busy, so as to prevent writing and reading at the same time?

    Kind regards

    Daniel Coelho

    Whether you choose to use the CVI lock mechanism or the critical-section functions of the SDK, the net effect is exactly the same: a single function that is used by all parts of your program to access a resource atomically. You don't need semaphores here. For example:

    CRITICAL_SECTION cs;    // Declare in a scope visible to all threads wanting to use the resource

    void Init (void)
    {
        ...
        InitializeCriticalSection (&cs);    // Do this somewhere early, before you actually use the resource
        ...
    }

    void CommFunc (...)     // This function represents the resource that must not be accessed concurrently
    {
        EnterCriticalSection (&cs);     // Stops any other thread from running past here

        ...                             // Now this block of code can run without fear of being re-entered

        LeaveCriticalSection (&cs);     // Finished - let other threads use the same code
    }

    Coming mainly from real-time programming, I tend to use the SDK components, as described above. Roberto promotes the CVI ones - I think that in this case their operation is identical.

    JR

  • Why LabVIEW example projects using Global Variables?

    I'm puzzled.  I've been to talks by some pretty good LabVIEW programmers (including some who work for NI) and came away with the impression that Global Variables should, as a general rule, be avoided, with Functional Global Variables (alias VI Globals) generally preferred for "local memory".

    I have studied some of the example projects distributed with LabVIEW 2012 and 2013, in particular the Real-Time acquisition ones, and am struck by the use of Global Variables where I would be inclined to use an FGV instead. For example, to stop all the loops on the RT target, a global "Stop All RT Loops" is defined; configuration "constants" (such as timeouts, network Streme names, log folder names) are kept as Globals; network Streme endpoints are stored in Globals.

    [Note - there is a deliberately odd spelling of "Streme" above - when I tried to post with the correct spelling, I got an error message saying this word is "not allowed in this community". I apologize for the offense, but I must confess that I do not understand what the problem is with the correct spelling of this word...]

    Why use Globals in these cases, rather than write a bunch of VIGs to hold these data?  Note that almost all of these Globals are essentially "write once, read many" (written once when a resource is acquired, for example) or "read only" (treated as if they were constants).  Indeed, a read-only variable can be written as a subVI with only an output terminal, acting as a (visible, thanks to its icon) constant.

    I can see advantages to that approach.  For one thing, VIGs can have error terminals that enforce data flow (I just spotted a "data flow" bug in code I am developing based on this model: it reads configuration data from an XML file into a Global and, in the same VI, wires the Global to a "use-me" terminal, but with no guarantee that I read the Global after I write it).

    There is, I suppose, the question of "speed" - perhaps Global Variables are "faster" than VIGs (especially if the VIG "sits" on an error line).  My feeling, however, is that this difference is likely to be trivial, especially as these VIGs (or Globals) tend to be "occasional" calls (with the exception of the "Stop All Loops" flag, which is called once per loop).

    Are there other arguments or considerations that make a Global Variable a better choice than a VIG?  Is there a reason the LabVIEW developers put them into these LabVIEW starter projects?

    BS

    I have to ask: how do you use your Functional Global Variables?  As just a Get and a Set?  If so, you might as well use a global variable.

    Yes, globals are faster and have much less overhead.  At the CLA summits in recent years we have talked about using globals.  The most common use is Write-Once-Read-Many and Write-Never-Read-Many configuration data.  It is a good idea to use globals for "constants" that might change on you; it turns out the global has the same performance as a constant in this case, and it gives you a single place to edit the "constant".

    The "globals are evil" rule actually goes back several years, to when NI had the huge "locals are bad" vendetta.  But NI never explained well how to do things properly.  So I found people using the Value property node instead of local variables.  That is even worse, because the property node causes thread swaps and kills your performance.  It wasn't until I shouted at people to use wires and shift registers that I saw improvements in the way people wrote their code.  People were still leery of using globals, so they decided to use FGVs with just Get and Set cases instead.  But that does not solve the race-condition problem for critical data, and it adds extra overhead.

    So, from my experience, I use globals all the time for configuration data.  Yes, you must be careful about race conditions.  But as long as you understand that, it is a common and useful practice.

    I would not use a global variable for data that is constantly changing (use shift registers or an Action Engine) and/or for processes that have critical sections of code (use an Action Engine).

    NOTE: I use Mercer's definitions of FGV (a Get/Set only) and Action Engine (many cases that specifically act on the data).

  • How to avoid race conditions?

    Hi, I've been doing LabVIEW for a dozen years, and this is the first time I've run into a race condition in my code. I have a lot of code that runs in parallel. What are good practices for avoiding race conditions, so that this never happens again?

    Stu

    I took an entire course on race conditions and concurrency in college; to avoid race conditions, you first have to understand why they occur.

    Race conditions occur when several parallel or concurrent processes access a shared resource.  It is especially troublesome when 2 or more threads or processes can modify that resource.  So first identify the candidates for this while architecting your application.  If you only have one thread you should be safe (although that gets blurrier in LabVIEW, since it can run things in parallel).  Then ask whether any variables/resources are needed by 2 threads at the same time, and if so, whether the value is changed in one or more of those threads.  If so, you have to deal with synchronization.

    Traditionally that means semaphores, locks, and mutexes - all methods of locking critical code (i.e. the resources that are sources of possible race conditions).

    BUT WE ARE IN LABVIEW.  LabVIEW has a simple mechanism that may be easier to use: the functional global.

    VIs that are not marked reentrant are essentially a great way to protect data and lock your critical sections: only one thread can access and/or modify the data stored in the VI at a time.  Other threads will block and wait on the resource.

    This is much more difficult to implement in C.

    Anyway, the way you deal with race conditions is to:

    1. identify the critical sections and resources that can cause race conditions (usually shared resources with multiple writers)

    2. lock the resource until you are done with it (avoiding deadlocks in the process - but that is another debate)

    Unlike traditional bugs, concurrency bugs are more random and very difficult to reproduce or test for, so you have to deal with them at the architecture and design stage.
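    (As an aside not tied to LabVIEW and not part of the original answer, here is a minimal Java sketch of steps 1 and 2 above; all names are invented. The shared list is the identified resource with multiple writers, and a single lock object guards every access to it.)

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative sketch: a shared, non-thread-safe list written by several
    // threads. Step 1: the list is the identified shared resource. Step 2:
    // every access is wrapped in a synchronized block on the same lock.
    public class SharedLogSketch
    {
        private static final Object lock = new Object();          // one lock for the resource
        private static final List<String> log = new ArrayList<String>();

        public static void append(String entry)
        {
            synchronized (lock)        // enter the critical section
            {
                log.add(entry);        // safe: only one writer at a time
            }                          // leave the critical section
        }

        public static int size()
        {
            synchronized (lock)        // readers use the same lock
            {
                return log.size();
            }
        }

        public static void main(String[] args) throws InterruptedException
        {
            Runnable writer = new Runnable()
            {
                public void run()
                {
                    for (int i = 0; i < 1000; i++)
                        append(Thread.currentThread().getName() + " entry " + i);
                }
            };
            Thread t1 = new Thread(writer, "writer-1");
            Thread t2 = new Thread(writer, "writer-2");
            t1.start(); t2.start();
            t1.join();  t2.join();
            System.out.println("entries: " + size());   // always 2000 with the lock in place
        }
    }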

  • mutex performance is slow in Windows XP guests on an ESX 5 host

    I have done a few casual producer/consumer benchmarks, written in C with Visual Studio 2005, on a dual-core 2 GHz CPU with identical processor type and Windows XP Professional as the OS. It seems to me that on ESX, execution of mutexes, events, and the corresponding WaitForSingleObject calls runs at 1/10th to 1/2 of the speed of the physical machine; under Windows 7, by contrast, performance is about half that of the corresponding physical machine. I was wondering whether anyone else has noticed this. Perhaps it is due to how mutexes/critical sections/events/semaphores are emulated in ESX's binary translation. Are there any settings for a 32-bit Windows XP guest under ESX 5 that would specifically boost process/thread synchronization speed and make it similar to a physical machine?

    I am attaching a small test project for VS 2005 in case anyone would take the trouble to try it and compare the raw speed of the synchronization primitives on physical hardware versus ESX - in particular, whether there is a macroscopic speed penalty under Win XP, or whether I did something wrong, since I may have run the test on an overloaded ESX host.

    gf111 wrote:

    Is it possible to wake up another process in Win XP without incurring the TPR register bashing?

    I do not know.  It is not clear to me why Windows wants to change the TPR in the first place.  The TPR is for blocking external interrupts, and I don't see why Windows would change which external interrupts are blocked depending on which user-level process is active.

    I noticed that Windows 7 synchronization is much faster on the same ESX host, probably even with FlexPriority not activated, likely thanks to "lazy TPR", which I believe was implemented in OSes later than XP. So I guess the more realistic prospect is to go with Win 7, given that in the particular environment I am working in they would not be willing to risk life and limb with a potentially unstable FlexPriority BIOS patch, plus the fact that XP support is end-of-life anyway. But thanks for giving me a clue about what may be happening.

    Windows XP 64-bit is another option.  It still bangs on the TPR, but it uses CR8 instead of MMIO. (An AMD hotfix did the same thing, using an AMD ISA extension that allows 32-bit code to access CR8.)

    Note also that FlexPriority is not a panacea.  It can reduce the cost of a TPR update from ~2000 cycles to ~400 cycles, but that is still far from the native cost, which is ~2 cycles.

    Under binary translation, TPR updates are rather cheap, but BT has its own performance issues.

    Is your Windows XP guest running with binary translation or hardware-assisted virtualization?  You can check the vmware.log file for "HV Settings" to find out.

  • synchronized on a final map.

    Hi people,

    I know that many people like to use a dummy object in the synchronized statement because it is static, but I wonder if I can use a final map instead? Don't get me wrong, I am not saying that I would do this; to me it looked wrong when I saw someone write this code, and I just need a second opinion. The way I see it, even though the map is final, it will change over time as elements are added to or removed from the map. It is not static. So I am not sure it is good to use in the synchronized statement. Any input would help, thank you!


    private static final Map<String, Long> MAP = new HashMap<String, Long>(100);

    public static Long getTimestamp(String id) {
        Long timestamp = null;
        synchronized (MAP) {
            Long t = MAP.get(id);
            if (t == null) {
                timestamp = System.currentTimeMillis();
                MAP.put(id, timestamp);
            }
        }
        return timestamp;
    }

    public static void releaseLock(String id) {
        synchronized (MAP) {
            if (MAP.containsKey(id)) {
                MAP.remove(id);
            }
        }
    }

    Seems perfectly reasonable to me.

    First, you don't synchronize on variables, you synchronize on objects. And since "final" refers to the variable, that means "final" does not really bear on your question.

    And secondly, it does not matter whether the object you choose as your synchronization monitor is mutable or not. All that matters is that all the code in the critical sections (those that should be executed by only one thread at a time) is protected by the same monitor.

    And in this case, since the critical sections all exist in order to modify the map, it makes perfect sense to use the map itself as the monitor. You could certainly use another object as the monitor - that is the "dummy object" you mentioned - but then the coder must be sure to use that same dummy object to protect every critical section. It is one extra thing to keep track of and potentially get wrong (a small sketch of that alternative follows below).
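    For comparison only (this is not from the original answer, and the class and method names are invented), a minimal sketch of the dedicated-lock-object alternative; every method has to remember to synchronize on the same LOCK object:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of the alternative discussed above: a dedicated lock object
    // instead of synchronizing on the map itself. It works, but every
    // critical section must use the same LOCK, which is one more thing
    // to keep track of and potentially get wrong.
    public class TimestampRegistry
    {
        private static final Object LOCK = new Object();
        private static final Map<String, Long> MAP = new HashMap<String, Long>(100);

        // Returns a new timestamp if the id was free, or null if already present
        public static Long acquire(String id)
        {
            synchronized (LOCK)
            {
                if (MAP.containsKey(id))
                    return null;
                Long timestamp = Long.valueOf(System.currentTimeMillis());
                MAP.put(id, timestamp);
                return timestamp;
            }
        }

        public static void release(String id)
        {
            synchronized (LOCK)          // same lock object everywhere
            {
                MAP.remove(id);
            }
        }
    }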

  • Performance using the virtualization problem

    If we compare our software installed on a physical host and on a comparable VM (same CPU, memory), we notice that the product runs twice as slowly in the virtual machine (the guest operating system is Windows 2003, on an ESXi 4.0 host; the software uses a single CPU).

    Since we are not in the production environment, we tried with all other VMs turned off. We tested our VM with and without a CPU reservation (better results with), with and without a memory reservation (absolutely no difference), and with 1, 2, and 4 vCPUs.

    After several tests, it seems the problem comes from the use of semaphores: when we replace them with critical sections (which we cannot simply do everywhere), performance is about the same on the physical host and in the virtual machine. All the code executes with similar performance, but when using semaphores the VM uses more CPU time than the physical host.

    Has anyone already heard of such a problem? Is there anything that explains the poor performance of Windows semaphores hosted by ESXi 4.0?

    For example, we wrote a simple program to compare Windows semaphores hosted by ESX (in our lab it took 10 s on a physical host and 22 s in a virtual machine):

    #include "stdafx.h".

    int _tmain (int argc, _TCHAR * argv)

    {

    unsigned __int64 nCount;

    NTickCount = DWORD: GetTickCount();

    HANDLE hSemaphoreBridgets = CreateSemaphore (NULL, 1, 1, NULL);

    for (nCount = 0; nCount < 10000000; ++ nCount)

    {

    WaitForSingleObject (hSemaphoreBridgets, INFINITE);

    ReleaseSemaphore (hSemaphoreBridgets, 1, NULL);

    }

    printf ("%d s\r\n duration", (: GetTickCount() - nTickCount) / 1000);

    CloseHandle (hSemaphoreBridgets);

    return 0;

    }

    I ran your benchmark under a profiler and found that it spends most of its time in these three Windows HAL functions:

    39.83%  hal!KfLowerIrql
    19.82%  hal!KeRaiseIrqlToDpcLevel
    19.07%  hal!KeRaiseIrqlToSynchLevel

    The hot spots in each function are the TPR accesses (0FFFE0080h is the address of the TPR in the local APIC):

    hal!KfLowerIrql:

    807168e4 890d8000feff          mov dword ptr ds:[0FFFE0080h], ecx
    807168ea a18000feff            mov eax, dword ptr ds:[0FFFE0080h]

    hal!KeRaiseIrqlToDpcLevel:

    807168a0 8b158000feff          mov edx, dword ptr ds:[0FFFE0080h]
    807168a6 c7058000feff41000000  mov dword ptr ds:[0FFFE0080h], 41h

    hal!KeRaiseIrqlToSynchLevel:

    807168bc 8b158000feff          mov edx, dword ptr ds:[0FFFE0080h]
    807168c2 c7058000feff41000000  mov dword ptr ds:[0FFFE0080h], 41h

    Since the local APIC is virtualized, a TPR access usually causes a VM exit under hardware virtualization.  However, Intel introduced FlexPriority, which avoids the VM exit for all TPR reads and some TPR writes.  For this reason, ESX 4.0 defaults to VT-x for 32-bit Windows 2003 on Intel chips with FlexPriority.  Unfortunately, FlexPriority is not a panacea.  On native hardware, TPR accesses usually take only a few cycles.  With FlexPriority, TPR accesses that do not cause a VM exit can still take hundreds of cycles, and TPR accesses that do cause a VM exit take several thousand cycles.  Fortunately, we still have the option of using binary translation.  Under binary translation, TPR accesses usually take dozens of cycles.

    For this particular workload, you should configure your guest to use binary translation.  On my Penryn system, the benchmark runs in 22 seconds using VT-x (with FlexPriority), but it takes only 13 seconds using binary translation.  (For completeness, it takes 90 seconds using VT-x without FlexPriority.)

    Your customer's situation is different.  AMD never introduced an equivalent of the FlexPriority technology.  If your customer has configured their VM to use hardware MMU support, then the virtual machine will use AMD-V, which suffers from the same problems as VT-x without FlexPriority.  Make sure they have configured the virtual machine for software MMU support so that it runs using binary translation.  (For this guest under ESX 3.5, the default execution mode is binary translation.)

  • Problem to connect to several Vmx files

    Hi guys, I have run into a big problem when trying to connect to multiple vmx files with the code below; it runs in a loop in which I add work items to a list and queue them on a thread pool:

    System.Threading.ThreadPool.QueueUserWorkItem(new System.Threading.WaitCallback(ProcessWakeAction), new object[] { /* ... */ });

    where "list__server" contains the vmx files to connect to and power on.

    P.S. Even if I force a sleep too:

    System.Threading.Thread.Sleep(System.TimeSpan.FromMinutes(1));

    the error still pops up.

    Can someone help me? Thanks

    The code follows:

    VixCOM.IJob jobHandle = s_VixComLib.Connect(global::VixCOM.Constants.VIX_API_VERSION,
        global::VixCOM.Constants.VIX_SERVICEPROVIDER_VMWARE_SERVER,
        "https://" + System.Net.Dns.GetHostEntry(this.m_Row.ToString()).AddressList[0] + ":443/sdk",
        0,
        global::GreenCpd.Program.domain + "\\" + global::GreenCpd.Program.UserName,
        global::GreenCpd.Program.password, 0, null, null);

    //----

    System.Object M_RESULTS = new System.Object();

    //----

    vixError = jobHandle.Wait(new int[] { /* ... */ }, ref M_RESULTS);

    //----

    if (vixError == global::VixCOM.Constants.VIX_OK)
    {
        //----

        // Enter critical section

        lock (s_VixHost = (global::VixCOM.IHost)((object[])M_RESULTS)[0])
        {
            jobHandle = s_VixHost.OpenVM(base.m_Row.ToString(), null);

            vixError = jobHandle.Wait(new int[] { global::VixCOM.Constants.VIX_PROPERTY_JOB_RESULT_HANDLE }, ref M_RESULTS);

            object[] M_RESULTS_IDX = M_RESULTS as object[];

            if (vixError == global::VixCOM.Constants.VIX_OK)
            {
                ((global::VixCOM.IVM)(M_RESULTS_IDX[0])).PowerOn(VixCOM.Constants.VIX_VMPOWEROP_LAUNCH_GUI, null, null).WaitWithoutResults();

                ((global::VixCOM.IVM)(M_RESULTS_IDX[0])).WaitForToolsInGuest(
                    int.Parse(System.Configuration.ConfigurationManager.AppSettings[/* ... */],
                        System.Globalization.CultureInfo.CurrentCulture), null);

                s_VixHost.Disconnect();

                global::GreenCpd.NGLibrary.CEventLogIO.RegisterLog(global::GreenCpd.Program.domain + "\\" +
                    global::GreenCpd.Program.UserName,
                    "Finished WakeUp VirtualServer" + base.m_Row);
            }
            else
            {
                // ...
            }
        }
    }

    What error do you get? I can see the source code you have, but I do not see the error itself. Am I missing something?
