Continuous recording from a USB-6008 to TDMS

I need to record data over long periods of time, using NI-DAQmx on Windows XP with a USB NI DAQ 6008. Recording to the TDMS format is probably very efficient, since the data is stored in binary, 2 bytes per channel. I modified one of the example files in order to obtain continuous logging to a TDMS file. Allow me to include the hacked source code and a screenshot of the error message. Note that the buffer sizes have been adjusted to the sector size (4096) of the Windows XP computer. Two problems:

(1) Data reading takes place approximately 3.5 times more slowly than anticipated, judging by the screen updates every 1000 samples.

(2) After approximately 3300 samples, the program crashes with the error in the attached screenshot, indicating that the acquisition buffer has overflowed.

If I change the data rate, for example to 500 samples/s instead of 1000 samples/s, there is no difference in behavior, so the cause is probably not a scheduling problem.

Changing the ConfigureLogging parameter from DAQmx_Val_LogAndRead to DAQmx_Val_Log (naturally!) produced an immediate error message indicating that data reads may not occur while logging takes place. I strongly suspect a coding error on my part.
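For reference, a minimal sketch of the two logging modes (this is not my attached code; it assumes the standard DAQmx C API, and the file and group names are placeholders):

    /* Sketch only: placeholder file/group names, error checking omitted. */
    #include <NIDAQmx.h>

    void configure_logging(TaskHandle task)
    {
        /* DAQmx_Val_LogAndRead: samples are streamed to the TDMS file AND
           remain available to DAQmxReadAnalogF64. */
        DAQmxConfigureLogging(task, "c:\\data\\log.tdms",
                              DAQmx_Val_LogAndRead, "run1",
                              DAQmx_Val_OpenOrCreate);

        /* DAQmx_Val_Log: DAQmx streams straight to disk; any subsequent
           DAQmxReadAnalogF64 call on the task fails immediately, which is
           consistent with the error message I saw. */
        /* DAQmxConfigureLogging(task, "c:\\data\\log.tdms",
                                 DAQmx_Val_Log, "run1",
                                 DAQmx_Val_OpenOrCreate); */
    }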

Any advice will be highly appreciated. Best regards, Willem

Hello. I'll try to answer some of your questions.

The value of numSampsPerChan affected how ReadAnalogF64 worked and produced errors when certain values were used. I finally set it to the system default value. What does this parameter actually mean if you sample at a frequency of 1 Hz?

 

When you're sampling continuously, the number of samples per channel setting specifies how many values to pull from the buffer on each read. The default you put in (DAQmx_Val_Auto, i.e. -1) means that each read grabs everything currently available in the buffer. If the array you are dumping into is too small for this (according to the arraySize parameter), then with the values you were using before, you may not have been pulling enough data per read to keep the buffer from overflowing.
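To make that concrete, here is a hedged C sketch of the two ways to size a read on a running continuous task ('task' and the sizes are placeholders; the arraySize contract is covered in the next answer below):

    #include <NIDAQmx.h>

    /* Sketch: two ways to size each read on a running continuous task.
       ARRAY_SAMPS must describe the real size of 'data'. */
    #define ARRAY_SAMPS 4096

    void drain_buffer(TaskHandle task)
    {
        float64 data[ARRAY_SAMPS];
        int32   read = 0;

        /* Option A: DAQmx_Val_Auto (-1) grabs everything currently in the
           buffer, up to ARRAY_SAMPS. If ARRAY_SAMPS is too small, the
           device buffer can still fill up and overflow between reads. */
        DAQmxReadAnalogF64(task, DAQmx_Val_Auto, 10.0,
                           DAQmx_Val_GroupByChannel,
                           data, ARRAY_SAMPS, &read, NULL);

        /* Option B: block until exactly 1000 samples per channel arrive. */
        DAQmxReadAnalogF64(task, 1000, 10.0,
                           DAQmx_Val_GroupByChannel,
                           data, ARRAY_SAMPS, &read, NULL);
    }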

Must the arraySize parameter in ReadAnalogF64 exactly match the size of the data[] array?

Technically it doesn't "need" to. It basically tells the function how big the array is. If you pass a value smaller than the actual size of the array, you'll be safe. If you pass a number larger than the actual size of the array, the function may attempt to write more values than the array can hold, and you will probably get an out-of-bounds array access. Remember, as I said in the question above, when the DAQmx function writes values into the buffer, it consults the arraySize parameter to check its limit, not the array itself (that is how pointers to arrays usually work, and it is a common point of frustration in C code).

Should it be the same as the last parameter in CfgSampClkTiming?

No. The last parameter of that function, in continuous mode, is actually used to help determine the buffer size (note: it does not set the size of the buffer; it only helps DAQmx decide what size of buffer to use).
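For illustration, a small sketch (DAQmx C API; the numbers are placeholders). The last argument of DAQmxCfgSampClkTiming is only a hint; DAQmxCfgInputBuffer is the call that sets the buffer size explicitly if you ever need to:

    #include <NIDAQmx.h>

    /* Sketch: in continuous mode the last argument of DAQmxCfgSampClkTiming
       is only a hint that DAQmx uses when choosing a buffer size. */
    void configure_timing(TaskHandle task)
    {
        DAQmxCfgSampClkTiming(task, "", 1000.0, DAQmx_Val_Rising,
                              DAQmx_Val_ContSamps,
                              10000);            /* hint, not the buffer size */

        /* To pin the input buffer to an exact size (in samples per channel),
           there is a separate call: */
        DAQmxCfgInputBuffer(task, 10000);
    }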

In addition, how is sampsPerChanToAcquire related to the values of the parameters in RegisterEveryNSamplesEvent?

You are right that they are not related. The nSamples setting in RegisterEveryNSamplesEvent sets the number of samples that must accumulate in the buffer (or leave the buffer, in the case of output) before the event fires.
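As a sketch (DAQmx C API; the callback body is a placeholder, and CVICALLBACK comes from the DAQmx header):

    #include <NIDAQmx.h>
    #include <stdio.h>

    /* Sketch: fire a callback every time 1000 fresh samples per channel
       have accumulated in the input buffer. */
    int32 CVICALLBACK EveryN(TaskHandle task, int32 eventType,
                             uInt32 nSamples, void *callbackData)
    {
        float64 data[1000];
        int32   read = 0;
        DAQmxReadAnalogF64(task, 1000, 10.0, DAQmx_Val_GroupByChannel,
                           data, 1000, &read, NULL);
        printf("event: read %d samples\n", (int)read);
        return 0;
    }

    void register_event(TaskHandle task)
    {
        /* nSamples here is independent of sampsPerChanToAcquire */
        DAQmxRegisterEveryNSamplesEvent(task, DAQmx_Val_Acquired_Into_Buffer,
                                        1000, 0, EveryN, NULL);
    }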

You can find much more information on this topic in the LabWindows help. If you go to LabWindows > Help > Contents, choose the Index tab and scroll alphabetically to "DAQmx...", you will find the list of all these DAQmx functions. If you double-click on one, you get a description of the function and each of its parameters. This might help clear things up for you. All these parameter names sound very similar and can be confusing, so it is a handy reference tool for clarifying things.

Tags: NI Hardware

Similar Questions

  • Real-time display of high-frequency data acquisition with continuous recording

    Hi all

    I have encountered a problem and need some help.

    I collect voltages and corresponding currents via a PCI-6221 card. While acquiring data, I would like to see the values on an XY graph, so that I can check current vs. voltage, not only voltage/current vs. time. In addition, the data should be recorded during the acquisition.

    First, I create the analog input channels with DAQmx Create Virtual Channel, then I set the sampling frequency and the mode, and start the tasks. The DAQmx Read is placed in a while loop. Because of the low signal-to-noise ratio, I want to average, for example, every 200 points of the acquired current, and plot this average versus the averaged acquisition time or the averaged voltage. The recording of the data should also take place in the while loop.

    The first thing I thought of was to run the data acquisition in continuous mode and use, for example, a 10 kS/s sampling frequency. The DAQmx Read is set to 1D Wfm NChan NSamp (there are 4 channels in total), and the number of samples per channel is, for example, 1000, to avoid buffer overflow errors. Each of these packets of 1000 samples must be separated (I use Index Array at the moment). After getting separate waveforms out of the 1D array of waveforms, I extract the Y values to get the waveform elements. The resulting arrays must then be processed to get average values.

    But how do I get these averages without slowing down my code?

    My idea/concern is this: I read 1000 samples after about 0.1 s. These are then divided into single waveforms, the time information is stripped, some kind of averaging loop is applied (I don't know exactly how yet), the data are plotted on an XY graph and saved to a .dat file. Only after all that has happened (I hope I have understood the data flow within a while loop correctly) does the code in the while loop read the next 1000 samples and process them.
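    To show the averaging step I have in mind, here is a rough sketch in plain C (my real code is a VI; this only illustrates the arithmetic of collapsing each block of 200 samples into one point):

    #include <stddef.h>

    /* Sketch: collapse each block of 'block' samples into one averaged point.
       'in' holds n samples of one channel; 'out' must hold n / block values. */
    size_t block_average(const double *in, size_t n, double *out, size_t block)
    {
        size_t nout = 0;
        for (size_t i = 0; i + block <= n; i += block) {
            double sum = 0.0;
            for (size_t j = 0; j < block; j++)
                sum += in[i + j];
            out[nout++] = sum / (double)block;
        }
        return nout;   /* number of averaged points produced */
    }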

    But if the processing takes too long, the DAQmx Read runs too late, and cycle by cycle the buffer reading falls behind the data generation on the PCI-6221 card.

    Is this concern reasonable? And how can I get around it? Does anyone know a way to average and save the data?

    I mean, the first thing I would consider is increasing the number of samples per channel, but this also increases the duration of the data processing.

    The other question is about timing. If I understand correctly, the timestamp is generated once when the task starts (with DAQmxStartTask), and the time difference between the data points is then computed as 1 divided by the sampling frequency. However, if the processing takes considerable time, how can I make sure that this error does not accumulate?

    I'm sorry for the long plain text!

    You can find my example VI attached (it is only meant to show roughly what I was thinking; I know there are two averaging functions and the rates are not set correctly at the moment).

    Best wishes and thank you in advance,

    MR. KSE

    PS: I should add: imagine the data acquisition running on a really old and slow PC, for example a Pentium III.

    PPS: I do not know why, but I cannot attach my VI...


  • How do I continue recording on a new Excel sheet?

    Hello

    I have a program that records data every 0.2 seconds to an Excel spreadsheet, but it should be able to record continuously for up to 48 hours. Since Excel has a limit of 65,535 rows, I need to open a new worksheet every 3.6 hours in order to continue the overall process. I have attached the program here; it is quite simple and easy to follow.
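    Just to show where the 3.6 hours comes from, a tiny C sketch of the rollover arithmetic (the row limit is the Excel 97-2003 one):

    #include <stdio.h>

    /* Sketch: when does a sheet fill up at one record every 0.2 s? */
    int main(void)
    {
        const double period_s  = 0.2;        /* one record every 0.2 seconds */
        const long   max_rows  = 65535;      /* Excel 97-2003 row limit      */
        double hours_per_sheet = max_rows * period_s / 3600.0;
        long   sheets_for_48h  = (long)(48.0 / hours_per_sheet) + 1;

        printf("one sheet lasts %.2f hours\n", hours_per_sheet);  /* ~3.64 */
        printf("need %ld sheets for 48 hours\n", sheets_for_48h); /* 14    */
        return 0;
    }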

    This program will be used to record the mass/energy flow of a fuel cell drive system. But right now I am working with this demo to get ready before recording actual data. If anyone can help me understand how to do this, I would appreciate it a lot.

    Thanks in advance for your time.

    Jose

    Hello

    No problem. Here is the VI. I've also attached a screenshot of my VI in case the save-to-previous-version copy does not work.

    Kind regards

    Toader Mihai

  • Trouble when using the FPGA to record data to TDMS

    Hello world

    I am encountering something strange when using the FPGA to save data to disk. I use 4 channels and save the data to TDMS. I use 4 different graphs to observe each channel and nothing looks wrong. But when I check the data stored in the TDMS file, it is not saved in the order it is supposed to be. Each column in the TDMS file should represent one signal from one channel, but the result shows that the data seems to fill one column first, then another. I do not understand how it gets this way. I have attached a few pictures for your information.

    The data from FIFO.Read should be a 1D array, right? So when it passes through the Reshape Array in my VI, it should become a 2D array with 4 columns. Each column represents the data of one channel. I don't see any problem with this VI. Why is the TDMS data not saved in the right order?

    Thank you

    In my opinion, you have simply reversed your dimension sizes. The number of rows should be 500 and the number of columns should be 4.
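    In C terms, the difference looks like the sketch below (hypothetical sizes matching your case; LabVIEW's Reshape Array fills row by row in the same way):

    #include <stdio.h>

    #define NCHAN 4
    #define NSCAN 500

    /* Sketch: the FIFO delivers one 1D interleaved block:
       ch0,ch1,ch2,ch3, ch0,ch1,ch2,ch3, ...  (NSCAN scans of NCHAN channels).
       Reshaping fills row by row, so the dimensions must be rows=NSCAN,
       cols=NCHAN; with rows=NCHAN, cols=NSCAN every row mixes the channels. */
    void reshape(const double fifo[NCHAN * NSCAN],
                 double table[NSCAN][NCHAN])
    {
        for (int scan = 0; scan < NSCAN; scan++)
            for (int ch = 0; ch < NCHAN; ch++)
                table[scan][ch] = fifo[scan * NCHAN + ch];
    }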

  • continuous recording of samples to a .lvm file with data acquisition

    Hello

    I use DAQ to acquire an analog signal and I'm saving 1 k samples to a .lvm file using the Write To Measurement File Express VI, with a sampling frequency of 1 k and 1 k samples in continuous mode. I can see the data stored in the .lvm file like this: 1 k samples, then a few lines of header text (like channels, samples, date, etc.), then another 1 k samples, and so on until the VI is stopped. But how do I save the samples continuously, without any header lines in between? I mean, save continuous samples.

    The sequence is shown in the attached .lvm file.

    Thank you.

    This should have been set when you configured the Express VI. Did you just leave the Express VI at its default settings? Have you tried changing the configuration to "One header only"?

    Your choice of an Express VI means you have limited options. If the options do not match your needs, you will need to use the lower-level file I/O functions.

  • continuous binary recording and then conversion to a spreadsheet

    Hello

    I'm trying to continuously save timestamps to a binary file on each iteration of a while loop, and then, when the code is stopped, I would like to convert this file to a CSV file. I have a problem with my implementation; could someone point out where I'm going wrong? Thanks!

    Check out: How to write and read binary files

    "Prepend array or string size?" must be set to TRUE on the Write to Binary File.

    I don't know why you want to save it as a 2D single-element array. A simplified version is attached.
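    If it helps, here is a minimal C sketch of the overall pattern (hypothetical file names; because every record here has a fixed size, no size prefix is needed, which is the role the "prepend array or string size?" input plays for LabVIEW arrays):

    #include <stdio.h>

    /* Sketch: append one timestamp per loop iteration to a binary file,
       then convert the whole file to CSV once the loop has stopped. */
    void log_timestamp(FILE *bin, double t)
    {
        fwrite(&t, sizeof t, 1, bin);      /* raw doubles, fixed record size */
    }

    int convert_to_csv(const char *binpath, const char *csvpath)
    {
        FILE  *bin = fopen(binpath, "rb");
        FILE  *csv = fopen(csvpath, "w");
        double t;

        if (!bin || !csv) return -1;
        while (fread(&t, sizeof t, 1, bin) == 1)
            fprintf(csv, "%.6f\n", t);
        fclose(bin);
        fclose(csv);
        return 0;
    }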

  • SQL - Finding consecutive records

    I'm looking for some SQL tips on finding consecutive records in a table. The table in question looks like this:

    ID   LITH  DEPTH

    1-1  SAND    150
    1-1  COAL    200
    1-1  SAND    250
    1-1  COAL    300
    2-2  SAND     75
    2-2  COAL    100
    2-2  COAL    150
    2-2  COAL    200
    2-2  COAL    250
    2-2  SAND    300
    2-2  COAL    400
    2-2  COAL    450

    I am trying to locate the records marked in bold above (the runs of consecutive COAL depths in the 2-2 group) and count the number of times they occur. In the example above, I would like to get back:

    ID   COUNT
    1-1  null
    2-2  4
    2-2  2

    I know this is a problem that could be solved outside the database, with Excel for example. However, I would really appreciate any tips on how to solve it with SQL.

    The following lists all the consecutive depths (step 50):

    SQL> with tab as (
      2  select '1-1' id, 'SAND' lith, 150 depth from dual union
      3  select '1-1' id, 'COAL' lith,  200 from dual union
      4  select '1-1' id, 'SAND' lith,  250 from dual union
      5  select '1-1' id, 'COAL' lith,  300 from dual union
      6  select '2-2' id, 'SAND' lith,  75 from dual union
      7  select '2-2' id, 'COAL' lith,  100 from dual union
      8  select '2-2' id, 'COAL' lith,  150 from dual union
      9  select '2-2' id, 'COAL' lith,  200 from dual union
     10  select '2-2' id, 'COAL' lith,  250 from dual union
     11  select '2-2' id, 'SAND' lith,  300 from dual union
     12  select '2-2' id, 'COAL' lith,  400 from dual union
     13  select '2-2' id, 'COAL' lith,  450 from dual
     14  )
     15  select id, lith, depth, max(level)
     16  from tab
     17  where connect_by_isleaf=1
     18  connect by prior id = id and prior lith = lith and prior depth=depth+50
     19  group by id, lith, depth
     20  order by id, depth;
    
    ID  LITH      DEPTH MAX(LEVEL)
    --- ---- ---------- ----------
    1-1 SAND        150          1
    1-1 COAL        200          1
    1-1 SAND        250          1
    1-1 COAL        300          1
    2-2 SAND         75          1
    2-2 COAL        100          4
    2-2 SAND        300          1
    2-2 COAL        400          2
    
    8 rows selected.
    

    And the following keeps only the records with at least two consecutive rows:

    SQL> with tab as (
      2  select '1-1' id, 'SAND' lith, 150 depth from dual union
      3  select '1-1' id, 'COAL' lith,  200 from dual union
      4  select '1-1' id, 'SAND' lith,  250 from dual union
      5  select '1-1' id, 'COAL' lith,  300 from dual union
      6  select '2-2' id, 'SAND' lith,  75 from dual union
      7  select '2-2' id, 'COAL' lith,  100 from dual union
      8  select '2-2' id, 'COAL' lith,  150 from dual union
      9  select '2-2' id, 'COAL' lith,  200 from dual union
     10  select '2-2' id, 'COAL' lith,  250 from dual union
     11  select '2-2' id, 'SAND' lith,  300 from dual union
     12  select '2-2' id, 'COAL' lith,  400 from dual union
     13  select '2-2' id, 'COAL' lith,  450 from dual
     14  )
     15  select id, count from (
     16  select id, lith, depth, max(level) count
     17  from tab
     18  where connect_by_isleaf=1
     19  connect by prior id = id and prior lith = lith and prior depth=depth+50
     20  group by id, lith, depth
     21  order by id, depth
     22  )
     23  where count>1;
    
    ID       COUNT
    --- ----------
    2-2          4
    2-2          2
    

    Do you really need the record with ID = '1-1' and count = null?

    Max
    [My Italian blog Oracle | http://oracleitalia.wordpress.com/2010/02/07/aggiornare-una-tabella-con-listruzione-merge/]

  • How to record an FFT at fixed time intervals

    Hello!

    I'm new to LabVIEW and I work with version 12.0. I developed my VI by searching through the internet and using the help option, but now I have reached a point where I cannot find a solution.

    I'm analyzing a signal, which I measure continuously at 2 MHz (buffer = 10,000).

    So far, I've saved the FFT by activating the save case via a Boolean control. That was fine for documenting the rough behavior of the signal, but I need a continuous record now.

    It would be perfect to take a snapshot of the FFT at fixed time intervals and save the data, with the corresponding times, to an Excel file. In the first place, the result should be something like a matrix that I can use for additional post-processing.

    I would like to run this VI for probably a few hours, and I realized that too much data is created.

    Is there a possibility to save, for example, only every 10th data point, or any other way to reduce the amount of data?

    Attached is my attempt so far.

    Best regards,

    Hi Mr. Pete,

    You are welcome. If you want the VI to write to TDMS as shown in your attached screenshot, you can change the code as follows:

    Since for every "true" case we store the data in a column, what we want to do is write to a different column for each trigger, but with a different header: FFT 1, FFT 2, and so on. So, as shown below, you must add a counter (we'll call it the column counter) that is incremented every time the case is true. Of course, you must initialize the column counter outside the while loop, so that we start at FFT 1 every time the VI starts. Let's say that on the first true trigger of the data-storage case, the column counter equals 1. This column counter must be converted to a string, so we use Format Into String. Then, since we use a common name, we prepend "FFT" using Concatenate Strings. The result of Concatenate Strings will be "FFT 1", and the data will be stored in a column called "FFT 1". This is repeated for each increment of the column counter (note that the increment occurs only when the case is true).

    Given that we do not want this VI to keep writing to the same column while it runs continuously, we use shift registers to carry the value from one iteration to the next. That means the column counter stays at 1 until the next 10 data points have arrived. So while the loop iteration is not equal to 10, the column counter stays at 1; when the loop iteration equals 10 (meaning 10 data points), the case structure is set to true, the column counter becomes 2, and the data is therefore stored in the column "FFT 2".

    This process then continues for FFT 2, FFT 3, and so on, until you stop the VI.
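    The same counter-and-header logic, written out as a small C sketch (the VI uses Format Into String and Concatenate Strings for the equivalent steps; the numbers are placeholders):

    #include <stdio.h>

    /* Sketch: every 10th iteration, move to a new column header
       "FFT 1", "FFT 2", ... - the column counter persists across
       iterations the way a shift register does in the VI. */
    int main(void)
    {
        int  column = 1;                       /* initialized outside the loop */
        char header[16];

        for (int iteration = 1; iteration <= 30; iteration++) {
            if (iteration % 10 == 0) {         /* the "true" case */
                snprintf(header, sizeof header, "FFT %d", column);
                printf("store snapshot in column \"%s\"\n", header);
                column++;                      /* increment only when true */
            }
        }
        return 0;
    }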

    Hope this helps.

    Warm greetings,

    Lennard.C

  • Recording data every hour; LabVIEW stops responding when the program is stopped

    Hello. I'm writing a LabVIEW program to read data and record it to a TDMS file every hour, for as long as it runs. First of all, I wanted to test it at every 15 minutes to work out the bugs. I made the attached VI with simulated data. It works fine, but when I press the stop button to stop the program, the mouse cursor becomes the "wait" cursor and remains like that until the program says "(not responding)", and then it crashes. Needless to say, the data is not recorded, or is corrupt. It does not do this with shorter time intervals (say, recording data every 15 seconds). Is the long time period why this is happening? Is there a better way to approach this? Thank you!


  • Problems writing continuously to a measurement file

    I'm a new LabVIEW user, so forgive me if this is simple, and do your best not to judge my program because it is a work in progress. I tried to create a data acquisition program using an ActiveX control, and the complexity of a single loop slowed it down so much that it did not work properly. I have changed it to a producer/consumer loop now, but when I try to start logging, it logs everything at first and then will not keep recording any more instances unless I hold the enqueue switch. Am I doing something wrong, or is there a better way to do it?

    Your VI shows that you need the "enqueue element" button lit in order for the data to be queued to your consumer loop.  It is set to "switch" action instead of "latch", which means it stays lit once pressed until you press it again to turn it off.

    But I think the problem you are encountering is that you have an event structure in the same loop with no timeout on it.  The event structure sits there waiting until an event occurs; only then does it let the loop iterate, and the loop comes back to wait on the event structure.  The events registered in the event structure are Stop: Value Change; Start Logging: Value Change; and Value Change for several of your digital controls.  So unless one of those events occurs, your producer loop remains paused.  I would recommend either wiring a timeout value to the event structure, or moving the event structure out of that loop and putting it in its own "user interface" loop.

  • PowerShot SX510 HS automatically stops video recording after 10-15 minutes of recording.

    The PowerShot SX510 HS automatically stops video recording after 10-15 minutes of continuous recording. Is there a way to disable this? I would like to record video until I press the stop button or the battery runs low. I have disabled all the energy-saving settings; at least, I think I have.

    The camera has some recording limitations, as do all Canon models. You can extend your recording durations to a maximum of about 1 hour, but there is a 4 GB limit regardless, so once it hits 4 GB the recording stops and has to be restarted.

    Also, the memory card has to be at least a Class 6 rating, or recording will stop prematurely regardless of the size of the clip.

    This is covered on page 161 of your manual.

  • Voice-over recording in Edit View vs. Multitrack View

    Hi guys, need some advice here.

    I have always recorded voice-overs in Edit View with Audition 3.0 (Windows 7 64-bit).

    Recently, I bought a Creative sound card (X-Fi Titanium HD) that sounds good and records 24-bit / 96 kHz with excellent results using several different microphones, mainly an EV RE20.

    I have the main left and right outputs of my mixer coming out on 1/4-inch cables and entering the sound card's left and right inputs via RCA...

    In Audition's audio hardware setup for Edit View, there is only a choice of either the left or the right input, but not a mix of the two as the input. (It is set to the left.)

    I do not use the microphone input on the card, because I wanted to use the RCA (left/right) line-ins on the sound card; the microphone input is a 1/8-inch jack that would require another Y microphone cable from 1/4 to 1/8, and I have had issues with 1/8-inch inputs before.

    So, when I record the voice-over in Edit View, I think I'm really only getting half of the signal (the left), which means I need more preamp gain than I otherwise would to get a good level.

    Recently, I have been recording in Multitrack View using the left on track 1 and the right on track 2, and then doing a mono mixdown, which gives a very pleasant level without having to increase the gain a lot.

    So I guess my question would be: am I actually recording only half the signal in Edit View by simply using the left input,

    and if that's true, should I just continue recording in Multitrack View using the configuration I described above?

    Any advice would be greatly appreciated.

    It's just a signal level issue - you don't actually lose anything. You can't really say it is "half" a signal - it is just the signal at a different level. And you can always get the level back in Audition without loss by recording or converting your signal to 32-bit floating point (Audition's native format) and normalizing it to whatever level you want. When you have only a mono microphone, one track is all you need; there is absolutely no benefit to recording two tracks. As for how you record (MV or EV), well, that is entirely up to you - whatever suits you. The only notable difference is that EV records to a temporary file that is saved to a permanent file later, while MV records directly to a file.

    Do you use a microphone preamp with your microphone when you use the line inputs? If you don't, you won't get anything like the right gain structure with the card, as a microphone alone does not provide anywhere near enough signal to drive a line input correctly. It is a hardware problem, and not one that you can correct with software, because it will significantly degrade the signal-to-noise ratio.

  • Will I lose data converting double-precision to single-precision float?

    Before you say Yes...

    I use a cRIO unit in Scan Interface mode. Scan mode returns values in double-precision floating point. Apparently I'm supposed to be able to choose between fixed-point ("uncalibrated") and floating-point ("calibrated") data, but this feature seems to be exclusive to the FPGA interface and is not available with the scan engine. Both data types are 64 bits per value, so when it comes to the size on disk, either way is basically the same.

    The system continuously records 13 channels of double-precision floating point at 200 Hz. Using Write to Binary File, I measured about 92 MB/hr on disk (more than 120 MB/hr with TDMS, and much more writing to a spreadsheet). In short, this 92 MB/hr rate is just too much data on disk for this system.

    The modules that I record from, the NI 9236, 9237 and 9215 C Series modules, have 24-bit A/D converters or less. Does this mean that I do not need 64 bits to represent the numbers with the same accuracy?

    Can I force/cast the double-precision floating-point values that I receive from the scan engine I/O variables to a different, smaller data type, such as single-precision floating point, and retain the same precision?

    Nickerbocker wrote:
    Between noise and equipment precision, I doubt that it makes much difference.

    You can test it by looking at the difference between a DBL and the same DBL converted to SGL. But I second Nickerbocker's point; I don't think it will make a difference.
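    A quick sketch of that test in C (it assumes values scaled from a 24-bit converter over +/-10 V; since single precision carries a 24-bit significand, the loss stays around 1e-7 relative):

    #include <stdio.h>
    #include <math.h>

    /* Sketch: convert DBL readings to SGL and measure the worst
       relative error over a sweep of 24-bit-quantized values. */
    int main(void)
    {
        double worst = 0.0;
        for (long code = -(1L << 23); code < (1L << 23); code += 997) {
            double dbl = code * (20.0 / (1L << 24));   /* +/-10 V, 24-bit step */
            float  sgl = (float)dbl;
            if (dbl != 0.0) {
                double rel = fabs((dbl - sgl) / dbl);
                if (rel > worst) worst = rel;
            }
        }
        printf("worst relative error: %g\n", worst);   /* ~1e-7, below noise */
        return 0;
    }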

  • TDMS data corrupted, mashed into one channel

    Hello

    I wonder if there is someone who can cut the corrupted entries out of a TDMS file?

    There is a continuous measurement logging to TDMS. The software is in LV 2012.

    On very rare occasions (I suspect some antivirus locking), the TDMS file gets corrupted, yet it was written (with no error in the error cluster) in the following sequence:

    When the file is opened after the measurement, only the first x data entries are shown. (A legacy text output shows all the data.) DIAdem and the native TDMS viewer VI show the rest of the data in one channel (called OutputIndex below).

    An Excel import returns error code (-2503) ("not a TDMS file", or similar). TDMS Defragment returns the same thing. The open and close VIs return no error.

    I uploaded the TDMS file here: ftp://ftp.ni.com/incoming/vojtest28.tdms

    Thank you

    c.

    Apart from the error handling and the opening/closing of the TDMS file, the code looks OK. So let me know if changing that for the records helps. You could move the error handling to the end of the loop - if an error occurs, just stop the for loop or something like that.

    In my opinion, the corruption is most probably caused by the file being opened and closed on each iteration... If you cannot change that, maybe try adding a Wait function in the loop? Start with high values (100 ms), then lower it, and see if the error still occurs. It's a workaround, but if you are unable to change the opening and closing, it is my only idea for the moment...

    If you can, please upload the TDMS file once more, because I cannot download it; I don't know why.

    And my last little piece of advice: try to keep the wires as straight as possible - especially in a case as simple as this, the error wire at the beginning of the loop: you go down with the wire, enter the case structure, re-enter the loop, and then back up; why not route it directly? Such code is always easier to debug and read.

    Best regards

  • Smart way to save large amounts of data using a circular buffer

    Hello everyone,

    I am currently getting into LabVIEW, developing a five-channel measurement system. Each "channel" will provide up to two digital inputs, up to three CSR analog inputs (the sampling frequency will be around 4 k to 10 k per channel) and up to five analog inputs for thermocouples (the sampling frequency will be lower than 100 S/s). On user-determined events (such as a sudden speed drop), the system should save a TDMS file that contains one row for each data channel, storing values from n seconds before the impact and for a user-specified length after it (for example from 10 seconds before the drop in rotation speed, with a total length of 10 minutes).

    My question is how to manage these rather huge amounts of data in an intelligent way: how to get the impact cases onto the hard disk without losing samples, and without dumping huge amounts of data to disk while recording the signals when there is no impact. I thought about the following:

    - use a single producer loop that only acquires the constant, high-speed data and writes it into queues

    - use a consumer loop to process the signal packets as they become available, to identify impacts, and to save the data when an impact is triggered

    - use a third loop with an event structure, to make it possible to control the VI without having to poll the front panel controls each time

    - use some kind of circular memory buffer in the consumer loop to store a certain amount of data that can then be written to the hard disk.

    I hope this is the right way to do it so far.

    Now, I thought about three ways to design the circular data buffer:

    - use RAM as a buffer (queues or arrays with a limited number of entries), written to disk in one step when finished, while the rest of the program and the DAQ remain active

    - stream directly to the hard disk using the advanced TDMS functions, using TDMS Set Next Write Position to jump back to the first entry once a specific amount of data has been written

    - stream all data to the hard disk using TDMS streaming, splitting the file at certain times and later, while still running, deleting the TDMS files that contain no anomalies.

    Regarding the first possibility, I fear that there will be problems with quickly growing arrays/queues, and especially that, when it comes to saving the data from RAM to disk, my program would be stuck writing data to disk and would thus lose samples in the DAQ loop, which I want to continue without interruption.

    Regarding the second, I have had a lot of trouble with TDMS; the data gets corrupted easily, and I am really not sure whether TDMS Set Next Write Position is suited to my needs (I would need to adjust the positions for (3 analog + 2 counter + 5 thermocouple) * 5 channels = 50 data values per row, plus a timestamp, in the worst case!). I am also afraid the hard drive will not be able to write fast enough to stream all the data at once in the worst case...?

    Regarding the third option, I fear that closing one TDMS file and opening a new one to continue recording will not be fast enough to avoid losing data packets.

    What are your thoughts here? Has anyone already dealt with similar tasks? Does anyone know some rough criteria for how much data one can attempt to stream to an average-speed disk at once?

    Thank you very much

    OK, I'm reaching back four years to when I implemented this system, so bear with me.

    Let's say we have a trigger and want to capture N samples before the trigger and M samples after it.  The scheme is somewhat complicated, because the goal is not to "miss" samples.  We came up with this several years ago and it seems to work - there may be an easier way to do it, but never mind.

    We created two queues - one "pre-event" queue of fixed length N, and one event queue of unlimited size.  We use a producer/consumer design, with state machines running each loop.  Without worrying about naming the states, let me describe how each of them works.

    The producer begins in its "pre-trigger" state, using Lossy Enqueue to place data in the pre-event queue.  If the trigger does not occur during this state, we stay there for the next sample.  There are a few details I forget, such as how we ensure that the pre-event queue is full, but skip that for now.  At some point, the trigger tips us into the post-event state.  Here we enqueue into the event queue, counting the number of items we enqueue.  When we get to M, we switch back to the pre-trigger state.

    On the consumer side, we start in a "waiting" state, where we simply ignore the two queues.  At some point the trigger occurs, and we switch the consumer to its pre-event state.  It is responsible for dequeuing (and processing) N elements from the pre-event queue, then handling the following M elements from the event queue.  [Hmm - I don't remember how we knew the event queue was finished - did we count M, or did we wait until the queue was empty and the producer was back in its pre-trigger state?]

    There are a few "holes" in this simple explanation, some of which I think we filled.  For example, what happens when triggers are too close together?  One way to handle this is to not allow a trigger to be processed until the pre-event queue is full.
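    In C terms, the pre-trigger part of the scheme is essentially a lossy ring buffer. Here is a minimal sketch (the queues, state machines, and the trigger condition of the real design are simplified to placeholders):

    #include <stdio.h>

    #define N_PRE  8                 /* samples kept before the trigger */
    #define N_POST 4                 /* samples kept after the trigger  */

    /* Sketch: a lossy ring buffer plays the role of the fixed-length
       "pre-event" queue; once triggered, the next N_POST samples are
       appended and the whole capture is handed to the consumer. */
    typedef struct {
        double ring[N_PRE];
        int    head, filled;
    } PreBuffer;

    static void pre_push(PreBuffer *b, double x)   /* lossy enqueue */
    {
        b->ring[b->head] = x;
        b->head = (b->head + 1) % N_PRE;
        if (b->filled < N_PRE) b->filled++;
    }

    int main(void)
    {
        PreBuffer pre = {0};
        double capture[N_PRE + N_POST];
        int post_count = -1;                /* -1 = still pre-trigger */

        for (int t = 0; t < 100; t++) {
            double sample = (double)t;
            if (post_count < 0) {
                pre_push(&pre, sample);
                /* placeholder trigger; real code tests the signal, and only
                   fires once the pre-event buffer is full */
                if (t == 50 && pre.filled == N_PRE) {
                    for (int i = 0; i < N_PRE; i++)     /* oldest first */
                        capture[i] = pre.ring[(pre.head + i) % N_PRE];
                    post_count = 0;
                }
            } else if (post_count < N_POST) {
                capture[N_PRE + post_count++] = sample;
                if (post_count == N_POST) {
                    for (int i = 0; i < N_PRE + N_POST; i++)
                        printf("%g ", capture[i]);      /* hand off to consumer */
                    printf("\n");
                    post_count = -1;                    /* back to pre-trigger */
                }
            }
        }
        return 0;
    }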

    Bob Schor
