Actual sample rate vs. minimum sample rate

I'm sure it's an obvious answer, but here goes.

I have a USB-5132 scope, and with niScope Configure Horizontal Timing.vi I set, among other things, the minimum sampling rate. In my case, I chose 20 MHz, which of course gives a sampling period of 50 ns.

I use niScope Read (poly).vi with the WDT variant to read waveform data. I noticed something very strange: waveform limit testing threw error 1802, "signals have different dt values", so I put a Get Waveform Components on the waveform wire and looked at the dt value of the wave. It told me that my dt is 40 ns, which of course corresponds to 25 MHz. I'm also only asking for 2000 samples.

So what causes this shift? Why won't the digitizer just accept my desired sampling frequency?

Austin Walton wrote:

Andy,

The minimum sampling rate setting is the rate at which the digitized samples are stored, expressed in samples per second. This setting is coerced up to the next legal rate that your device supports. The Actual Sample Rate property returns the actual rate used for the acquisition.

Unless you specify another clock source, the digitizer uses an internal oscillator as the sample clock source. For the 5132, this oscillator runs at 50 MHz. When the internal oscillator is used as the sample clock source, the digitizer can only use integer divisions of this clock, so certain sampling rates are not possible. That is exactly what you are seeing: a requested minimum of 20 MHz is coerced up to 50 MHz / 2 = 25 MHz, which gives dt = 40 ns.
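
A minimal C sketch of the same flow with the NI-SCOPE C API (the "Dev1" resource name and the 20 MHz request are placeholders; configure a minimum rate, then read back what the driver actually chose):

#include <stdio.h>
#include "niScope.h"

int main(void)
{
    ViSession vi;
    ViReal64 actualRate = 0.0;

    niScope_init("Dev1", VI_TRUE, VI_FALSE, &vi);

    /* Ask for a *minimum* of 20 MS/s over 2000 points; the driver may
       coerce this up to the next rate the hardware supports. */
    niScope_ConfigureHorizontalTiming(vi, 20.0e6, 2000, 50.0, 1, VI_TRUE);

    /* Read back the rate actually used: 25 MS/s (dt = 40 ns) on a 5132. */
    niScope_SampleRate(vi, &actualRate);
    printf("Actual sample rate: %g S/s\n", actualRate);

    niScope_close(vi);
    return 0;
}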

Tags: NI Products

Similar Questions

  • Will another SIM besides Apple's fit in an iPad mini 3?

    I bought an iPad mini 3 (refurb) and it came with an Apple SIM. So I bought an AT&T BYO tablet nano SIM. It is too wide to fit in the tray. Will it work if I cut it down?

    The mini 3 uses a nano SIM, so it should fit. If it doesn't, you may not have a real iPad mini 3.

  • Mini DisplayPort does not fit late-2009 iMac

    Hi, I bought a genuine Apple Mini DisplayPort adapter from an Apple Premium Reseller, but it does not fit the late-2009 iMac (the slot to the right of the FireWire slot). Should I try to exchange it in case mine is faulty? Or is there another way to connect it?

    Thank you in advance. God bless you. Revelation 21:4

    Are you sure that you have a late 2009? Can you confirm what it says in 'About This Mac' after the text "iMac"?

  • NI-6120 - actual sample rate resolution

    Hello.

    I use the NI-6120 card and LabVIEW 8.2.

    I have not found any information on the resolution of the sampling rate.

    So I wonder if anyone knows what the resolution is?

    Also, can I get the real value of the card's sampling frequency once I have programmed the sampling frequency that I want?

    Thank you!

    Sincerely,

    Roman

    Hi Roman,

    According to the specifications, the sampling rate for a PCI/PXI-6120 is 800 kS/s with the NI-DAQmx drivers.

    If you want to get the actual sampling frequency with LabVIEW and the DAQmx drivers, use a Timing property node and select Sample Clock > Rate.
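
    The same property is available from the C API as DAQmxGetSampClkRate. A minimal sketch (the device/channel name and the requested rate are placeholders):

    #include <stdio.h>
    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle task = 0;
        float64 actualRate = 0.0;

        DAQmxCreateTask("", &task);
        DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);

        /* Request 800 kS/s; the driver coerces this to the nearest rate
           the board's timebase can actually generate. */
        DAQmxCfgSampClkTiming(task, "", 800000.0, DAQmx_Val_Rising,
                              DAQmx_Val_ContSamps, 1000);

        /* Equivalent of the Timing property node's Sample Clock > Rate. */
        DAQmxGetSampClkRate(task, &actualRate);
        printf("Actual sample rate: %g S/s\n", actualRate);

        DAQmxClearTask(task);
        return 0;
    }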

    Kind regards

    Ludovic R.

    Links:

    NI-6120 data sheet

  • Sample rate conflict (specified vs. actual)

    I'm using an NI 9234 to acquire sensor data with LabVIEW 8.6. I have only been using LabVIEW for the past two weeks, so please bear with me, as my knowledge is quite basic. I'm reading several channels over time. My problem is that when I finished my VI, I discovered that whenever I change the rate in my code, it samples even faster (a higher sampling rate than what I specified).

    I've never used an external timebase with a data acquisition card, so it's a bit outside my field of knowledge. You may want to consult the manual to see if it's possible.

    Personally, I wouldn't bother. If you want a sampling rate lower than 1.652 kHz, you could always decimate down to a lower rate. For example, if you acquire at 1.652 kHz and then keep every 16th sample, you end up with an effective sampling rate of 103.25 Hz.
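
    A minimal C sketch of that decimation (naive keep-every-Nth; for production you would low-pass filter first to avoid aliasing, though the 9234's on-board filtering helps):

    #include <stddef.h>

    /* Copies every `factor`-th sample of `in` into `out`; returns the
       number of samples written. `out` must hold at least n/factor + 1
       values. With factor = 16, 1.652 kHz becomes 103.25 Hz. */
    size_t decimate(const double *in, size_t n, double *out, size_t factor)
    {
        size_t count = 0;
        for (size_t i = 0; i < n; i += factor)
            out[count++] = in[i];
        return count;
    }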

    If you want exactly 100 Hz, you could do as they suggest in the link and use "Resample Waveforms (continuous).vi" to resample the data.

  • Specific sampling rate?

    So maybe this is a stupid question, but I need to know because I'm aiming for a specific sound. Is there a way [in Logic] to pull/shift certain frequencies of a sampled sound? I can imitate the sound I'm looking for with a low-pass filter, reverb, and distortion. But I don't want to 'emulate' this sound, I want to create it. Then I can put my own effects on it and play with it like I want to. If I have to use a bunch of effects to make it sound the way I want, adding those effects also takes away from the sound and it sounds horrible, whereas pulling the sampling frequency from the highs rather than the lows would give me THE sound I need and still let me add nice effects to make MY sound instead of someone else's. I hope you know what I mean. Let me be specific once more: I want to pull or shift certain frequencies of a sample to get an underwater sound. I don't want to use filters to make the sound. So can you please help me? I've asked everyone locally how to do it and nothing works. Also, if this is not possible in Logic, tell me if there are third-party plug-ins or maybe even a different DAW that could do it, like Komplete Kontrol or Audacity.

    Underwater effect

    See if this thread is helpful at all...

  • Synchronizing data from acquisition tasks with different sample rates

    I'm using an 8135 RT controller. I'm sampling analog pressure signals and thermocouples at 20 ms and 10 ms. I use network streams to transfer the data from the RT system to my host. I would like to be able to synchronize the timestamps of all the data to within 1 ms and record it in a file.

    Looking at the file, the timestamps of my 10 ms data don't match the timestamps of the 20 ms data; data is missing... I know I have to empty the queues to get the remaining data, but is it possible to interpolate the data so everything fits a single timestamp?

    Thank you!

    Hi aokada09,

    It looks like you are facing problems resulting from the parallel execution of the two loops you have. There is nothing that necessarily binds the two loops together, so each of them iterates at the rate you specified, but when they start is dictated by the software (somewhat at random).

    To get solid synchronization, share a sample clock between the two measurements: run the sample clock at 100 Hz for the pressure measurement and 50 Hz for the thermocouple measurement. You should be able to use the shared clock on the chassis backplane. This will be as close to synchronized as you can get without using a timing card. The only real source of delay/skew between readings at that point is the physical distance the clock signal must travel. This won't be steep, but there will certainly be some (probably on the scale of high nanoseconds or low microseconds). This article gives more information about synchronization and sample clocks:

    http://www.NI.com/white-paper/11369/en/
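
    A minimal NI-DAQmx C sketch of one way to remove the random start offset (device, channel, and terminal names are assumptions; on the same chassis both sample clocks already derive from the shared backplane timebase):

    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle pressure = 0, thermo = 0;

        DAQmxCreateTask("pressure", &pressure);
        DAQmxCreateAIVoltageChan(pressure, "PXI1Slot2/ai0", "",
                                 DAQmx_Val_Cfg_Default, -10.0, 10.0,
                                 DAQmx_Val_Volts, NULL);
        DAQmxCfgSampClkTiming(pressure, "", 100.0, DAQmx_Val_Rising,
                              DAQmx_Val_ContSamps, 1000);

        DAQmxCreateTask("thermo", &thermo);
        DAQmxCreateAIThrmcplChan(thermo, "PXI1Slot3/ai0", "", 0.0, 100.0,
                                 DAQmx_Val_DegC, DAQmx_Val_J_Type_TC,
                                 DAQmx_Val_BuiltIn, 25.0, "");
        DAQmxCfgSampClkTiming(thermo, "", 50.0, DAQmx_Val_Rising,
                              DAQmx_Val_ContSamps, 1000);

        /* Slave the thermocouple task's start to the pressure task's
           trigger so both begin on the same clock edge. */
        DAQmxCfgDigEdgeStartTrig(thermo, "/PXI1Slot2/ai/StartTrigger",
                                 DAQmx_Val_Rising);

        DAQmxStartTask(thermo);   /* armed, waiting for the trigger  */
        DAQmxStartTask(pressure); /* fires the trigger; both aligned */

        /* ... read, timestamp, and stream to the host as before ... */

        DAQmxClearTask(thermo);
        DAQmxClearTask(pressure);
        return 0;
    }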

  • myRIO sampling rate

    I'm new to myRIO and am using it to measure sine waves (0 V to 5 V) from 10 Hz up to 20 kHz. I also do a fast Fourier transform (FFT) of the measured signal in real time.

    On the FPGA side of things, I'm trying to keep things fairly simple: just reading 2 AI channels (connector B: AI0 and AI1), so potentially reading each AI at 250 kS/s (as the device has an aggregate capacity of 500 kS/s). Does that mean this program gets data from the two analog inputs exactly every 4 microseconds? If not, how can I make sure the data is acquired at a fixed sampling rate?

    I realize that I can add the FFT function in the FPGA, but I want to manipulate the acquired analog input data before it is sent to the FFT, which I don't know how to do right now. Can someone explain how to do arithmetic (multiplication, division, and so on) on the acquired analog input data, and how to reduce the 12-bit resolution to 10 bits, in the FPGA program?
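
    (For the bit-depth part, the operation itself is just an integer right-shift by 2, which maps directly to the LabVIEW FPGA integer primitives; a C sketch of the equivalent:)

    #include <stdint.h>

    /* Reducing a 12-bit sample to 10 bits: drop the two least-significant
       bits (an integer divide by 4). */
    static inline uint16_t reduce_12bit_to_10bit(uint16_t sample12)
    {
        return (uint16_t)(sample12 >> 2);
    }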

    Later, I created a myRIO program to read the analog data from the FPGA program, which keeps running in a timed loop. In the myRIO program, the timed loop is configured with a 1 MHz clock source and a period of 25 microseconds.

    Does this configuration mean that the loop runs exactly every 25 microseconds?

    When I set the period to less than 10 microseconds, myRIO stopped working. Why is that?

    Is it because myRIO cannot run as fast as the FPGA?

    Is it advisable to do the FFT of the analog data on the myRIO side or on the FPGA?

    When I tried to do the FFT using the power spectrum on the myRIO side, it asked for waveform data. What I acquire is analog data. How can I convert it to waveform data?

    I've read the forums for help, but couldn't find complete answers to my doubts.

    Forum discussions I referenced:

    http://forums.NI.com/T5/academic-hardware-products-Elvis/setting-tick-count-in-myRIO-FPGA-software/m...

    http://forums.NI.com/T5/academic-hardware-products-Elvis/myRIO-aggregate-sample-rate/m-p/2707061/HIG...

    A lot of good questions here; I'll try to answer as many as I can and offer a bit of advice.

    First of all, if you're looking to acquire data at a very specific rate on the FPGA, you'll want to use the Timer VI. You're also going to want a DMA FIFO to transfer data from the FPGA to real time. Using a read/write node as you do now means you'll either miss samples or read the same sample several times. The link below is a very good tutorial on how to do what I described above.

    http://www.NI.com/Tutorial/4534/en/

    Later, I created a myRIO program to read the analog data from the FPGA program, which keeps running in a timed loop. In the myRIO program, the timed loop is configured with a 1 MHz clock source and a period of 25 microseconds.

    Does this configuration mean that the loop runs exactly every 25 microseconds?

    When I set the period to less than 10 microseconds, myRIO stopped working. Why is that?

    Is it because myRIO cannot run as fast as the FPGA?

    In general, you should not run a timed loop much faster than 1 kHz. Using the timing nodes inside the loop, you can monitor the actual loop rate during execution to see if it meets your timing needs.

    The RT portion of your myRIO is slower than the FPGA in the sense that it cannot manage 40 MHz loop rates (it makes up for that by being able to work with much larger arrays), and it is important to remember that it is just a computer. The advantage of a real-time operating system is that you have more control over the scheduler, not that it is faster (less jitter, not faster code). There is more good reading below.

    http://www.NI.com/white-paper/3938/en/

    Is it advisable to do the FFT of the analog data on the myRIO side or on the FPGA?

    When I tried to do the FFT using the power spectrum on the myRIO side, it asked for waveform data. What I acquire is analog data. How can I convert it to waveform data?

    I would say that it is generally advisable to do your FFT on the FPGA side as long as you have the resources available, but for many applications it probably matters little in the end.

  • Writing high-sampling-rate acquisition data to file

    These are the tasks I have to do to take noise measurements:

    (1) Acquire continuous data from a USB-6281 desktop device, at a sample rate of 500 kS/s (50 k samples at a time).

    (2) Save data continuously for 3 to 6 hours to file (any format is OK, but I need to save to a series of files rather than a single file). I want to start writing a new file every 2 minutes.

    I attach my VI and pictures of my task setup. I can measure and write data to file continuously for 15 minutes. After that, I see these errors:

    (1) The hardware acquisition can't keep up with the software (something like that, along with a suggestion to increase the buffer size...).

    (2) memory is full.

    Please help me make my VI efficient and correct. It was suggested that I remove the "Write to File" from the consumer loop because it takes a long time to open and close a file in a loop. You might suggest opening the file outside the loop and writing inside the loop. But I want to save my data to a new file every 2 minutes or every so many samples. If this can be done efficiently without using Write to Measurement File, then please let me know.

    Thank you in advance.

    The example here is for a single file and a single channel; you should be able to loop over that automatically. The comment at the bottom should be the channel name; instead of a group name, put the channel name in the control.
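
    A minimal C sketch of the rotation idea for the consumer loop (file names, chunk size, and the 2-minute period are placeholders; the file stays open between writes and is only closed/reopened at each rotation, so individual writes stay cheap):

    #include <stdio.h>
    #include <time.h>

    #define ROTATE_SECONDS 120
    #define CHUNK 50000

    int main(void)
    {
        time_t fileStart = time(NULL);
        int fileIndex = 0;
        char name[64];
        snprintf(name, sizeof name, "noise_%04d.bin", fileIndex);
        FILE *f = fopen(name, "wb");

        for (int i = 0; i < 1000; ++i) {
            double chunk[CHUNK] = {0};
            /* ... dequeue 50 k samples from the acquisition loop ... */

            if (time(NULL) - fileStart >= ROTATE_SECONDS) {
                fclose(f);                 /* rotate: close the old file */
                fileStart = time(NULL);
                snprintf(name, sizeof name, "noise_%04d.bin", ++fileIndex);
                f = fopen(name, "wb");     /* ... and open the next one  */
            }
            fwrite(chunk, sizeof(double), CHUNK, f);
        }
        fclose(f);
        return 0;
    }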

  • PCI 5154 sampling rate

    Hello

    What is the max sample rate of the PCI-5154 for two-channel input? The manual states that 2 GS/s is for one channel only. So am I not able to get 1 GHz of bandwidth for a simultaneous measurement of two channels? Thank you!

    Hi gbhaha,

    First of all, TIS mode goes up to 20 GS/s using one ADC, while real-time sampling uses two A/D converters at the same time on a single channel. Take a look at the diagrams I linked in my first post for more details on this architecture.

    Regarding the difference in bandwidth between the 5153 and 5154: the 5153 has 500 MHz of bandwidth in its circuitry, even when acquiring at faster sampling rates. The 5154 has 1 GHz of bandwidth, which is why it is more expensive.

    Kind regards

  • DMM (NI 4070): how to correctly set the AC frequency (bandwidth) for the sampling rate

    I'm using an NI 4070 multimeter and I see the max bandwidth is 300 kHz. But I don't understand how to set the min and max acFrequency according to the sampling frequency or reading rate.

    At 6½-digit resolution, the reading rate can vary from 0.25 S/s to 100 S/s, and this range corresponds to a lower end of the bandwidth (minimum acFreq) from 1 Hz to 400 Hz.

    (Q1a) Is the reading rate controlled by the minimum setting of IviDmm_ConfigureACBandwidth? Or vice versa?

    Otherwise, I do not see how to control the reading rate or the sampling frequency. IviDmm_ConfigureMeasurement only allows you to control the range and resolution.

    (Q1b) Is there a way to directly control the sample rate (digitizer) or reading rate (DMM)?

    (Q2) The upper limit of the AC bandwidth always seems to be 300 kHz... is there ever a reason to reduce this maximum value?

    (Q3) Finally, unlike the traditional niDmm functions, the resolution in the IVI configuration must be passed as an absolute value; does it map directly to number of digits and range? For example, if I want 6½ digits on the 300 V range, I gather from the specifications that the resolution should be set to 0.001 V; following that, if I want 5½ digits on the 1 V range, the resolution should be set to 0.00001 V?

    Hi Rjohnson,

    I'll try to answer your questions as best as I can:

    Q1a. The ConfigureACBandwidth function is used by the NI-DMM driver to calculate the right aperture for the measurement. So yes, by adjusting your minimum frequency, you will affect your reading rate.
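
    A minimal IVI-C sketch of that relationship (assuming an already-initialized session `vi`; the range, resolution, and frequency values are illustrative only):

    #include <ividmm.h>

    void configure_ac(ViSession vi)
    {
        /* 6.5 digits on the 300 V AC range: per the IVI convention the
           resolution is passed as an absolute value (0.001 V). */
        IviDmm_ConfigureMeasurement(vi, IVIDMM_VAL_AC_VOLTS, 300.0, 0.001);

        /* Raising the minimum AC frequency (here 400 Hz) lets the driver
           choose a shorter aperture, which raises the reading rate. */
        IviDmm_ConfigureACBandwidth(vi, 400.0, 300000.0);
    }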

    Q1b. Your reading rate will depend largely on your measurement cycle. To get a fast measurement cycle, there are a few things you can adjust. You can programmatically control your aperture time, as well as your settling time.

    Q2. I can't find a reason to change it. This parameter is only used for error checking, which verifies that the value of this setting is less than the maximum frequency of the device.

    Q3. I think what you say is right, but I'll need to check on that; I'll let you know as soon as I can.

    Hope that helps. I would recommend checking the explanation of the DMM measurement cycle in the DMM help under Devices » NI 4070 » DMM Measurements » DMM Measurement Cycle.

    Take care!!

  • Does the NI USRP-295xR RIO allow an Ettus BasicRX front end for IF sampling? And what is the ADC-to-FPGA data rate?

    Hello!

    I'm on a quest to replace some aging PCI-5640Rs. I currently use one as a portable data-logging solution, mounted in a Magma ExpressCard-to-PCI box with a laptop. As a time reference, I use a Symmetricom XLi.

    The setup depends on sampling a finite set of samples once a trigger signal goes high, and also on receiving antenna azimuth information using two more PFI lines. The signal is sampled at an IF of 30 MHz, and it is less than 5 MHz wide.

    Now I have started looking at the Ettus X310 with a GPS OCXO and MXI-Express interface, which should be the same as the NI USRP-295xR. It is available as an NI RIO device with three different front ends; unfortunately, none of them work at 30 MHz.

    Q1: Ettus has the 'BasicRX' front end, but it is only listed as compatible with the LabVIEW driver and not necessarily with RIO. Is the BasicRX front end usable with the USRP-295xR RIO and MXI interface under LabVIEW FPGA? Should I just avoid trying to tune the nonexistent LO? As long as it gives me data, I can live with some error messages during setup...

    That would be the best solution for me, but if it is absolutely impossible, I have a few more questions:

    Q2: Information on the front ends is really scarce on both the NI and Ettus web pages, but the WBX is listed as going down to 50 MHz, with a 40 MHz bandwidth low-pass filter on I and Q. This should mean a total of 80 MHz of bandwidth with I and Q combined, from -40 to +40 MHz. Why is the bandwidth listed on the NI web pages as "40 MHz real-time bandwidth" if the WBX low-pass filter is 40 MHz on both I and Q? Isn't the total bandwidth 80 MHz?

    Q3: Assuming a -40 to +40 MHz bandwidth: could I set the WBX LO to 50 MHz, so that the 30 MHz signal is tuned down to -20 MHz, and use a bandpass filter on the FPGA to extract the signal and remove all other signals?

    Q4: I tried to start an FPGA project in LabVIEW and add the X310/294xR/295xR as a target. The data clock is locked to 120 MHz, which I guess means it will receive IQ data at 120 MS/s? The Ettus X310 is listed as providing ADC data to the FPGA at 200 MS/s; could someone explain why the NI USRP RIO expects only 120 MS/s data?

    Hi Idar,

    Yes, you should be able to put the BasicRX on your X310/USRP RIO and use LabVIEW FPGA to receive 120 MS/s from the ADCs. The example I posted is in fact not the precompiled bit file; it is for LabVIEW FPGA, which lets you add your own IP to the FPGA. There is a sample project that ships with LabVIEW FPGA which is the recommended starting point for building your FPGA application. The sample project has all the configuration set up, as well as continuous streaming, buffers/FIFOs in the FPGA, and examples for synchronization. There are comments in the example code that show where to add your own IP blocks, such as the filter and decimator you mentioned. The PDF I posted shows what changes you must make to this sample project to use the BasicRX/LF daughterboard.

    Let me know if I'm not explaining this clearly, or if you have any questions; I'd be happy to help!

  • How to get the max sampling rate of an NI 9263 and other cards?

    Hello!

    I'm using an NI 9263 card and a cDAQ-9172 chassis in my project, and I'm programming with CVI 8.0. I'm generating sine and square waves to do some tests on a radio.

    I want my program to be functional for all cards of this type, and we know that most cards have different specifications, for example the max sampling rate; in this case the NI 9263 has 100 kS/s as the maximum. I'm generating waves based on the sampling frequency.

    Since my program must be compatible with most cards, I need to query the max sampling rate using a specific NIDAQmx.h function.

    Do you know if there's a function or attribute that can return this value?

    I tried this function with different attributes, with no results:

    DAQmxGetTimingAttribute(taskHandle, DAQmx_SampQuant_SampPerChan, &MaxSamp);
    DAQmxGetTimingAttribute(taskHandle, DAQmx_SampClk_Rate, &MaxSamp);
    DAQmxGetTimingAttribute(taskHandle, DAQmx_SampQuant_SampPerChan, &MaxSamp);
    DAQmxGetTimingAttribute(taskHandle, DAQmx_SampClk_TimebaseDiv, &MaxSamp);
    DAQmxGetTimingAttribute(taskHandle, DAQmx_SampClk_Timebase_Rate, &MaxSamp);

    The first three attributes give me the actual sample rate, which is 1 kS/s (which, I believe, is the default sample rate for all cards before the user initializes them), but they do not give me the max rate (100 kS/s).

    The remaining attributes only give me the value of the clock, which is 20 MHz, and the clock divisor (20000). I also tried with a 9264 card (max sample rate 25 kS/s) and the function returns the same results.

    Any idea?

    Thank you!!

    Hey Areg22,

    I think I've found the property you're looking for:

    http://zone.NI.com/reference/en-XX/help/370471W-01/mxcprop/func22c8/

    This link gives just the syntax for the function; the following gives you more information about it:

    http://zone.NI.com/reference/en-XX/help/370471W-01/mxcprop/attr22c8/

    When I used this property, the function's output was 100,000 for the NI 9263, which is consistent with the data sheet. Let me know if it works for you.
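
    In C that property is DAQmxGetSampClkMaxRate; a minimal sketch (the module name is a placeholder):

    #include <stdio.h>
    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle task = 0;
        float64 maxRate = 0.0;

        DAQmxCreateTask("", &task);
        DAQmxCreateAOVoltageChan(task, "cDAQ1Mod1/ao0", "", -10.0, 10.0,
                                 DAQmx_Val_Volts, NULL);

        /* Queries the hardware maximum, not the currently configured rate. */
        DAQmxGetSampClkMaxRate(task, &maxRate);
        printf("Max sample rate: %g S/s\n", maxRate); /* 100000 for an NI 9263 */

        DAQmxClearTask(task);
        return 0;
    }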

    Thank you

    -KP

  • Writing signals with different sample rates to file

    Hi all.

    I have a somewhat periodic signal, each cycle having its own characteristics such as max, min, mean, RMS, etc.

    I would like to write *ALL* of this data to a single file, on an absolute time scale, where the cycle data is stamped with the time of the cycle's maximum.

    The cycle data columns must be empty at all times except the time of the signal's peak value.

    Which raises 2 questions:

    1. Can I build a 'waveform' with a non-constant sampling rate, where I just stamp each data point arbitrarily? Will this get me the result I want?

    2. Is there a better way to convince LabVIEW to write the cycle data next to the signal, with correct timestamps?

    An example is attached.

    Thanks for the tips...

    Thanks Heuter, your example is useful - the problem is solved.

  • DAQmxCfgSampClkTiming sampling rate for external sources

    I'm looking at the example Synchronized_AIAO_Shared_Clock.c at http://zone.ni.com/devzone/cda/epd/p/id/2352. This example creates an AI voltage channel that streams at 10 kHz, and then creates an AO voltage channel that is bound to the AI sample clock to synchronize the channels. I'm using this example to understand the use of DAQmxCfgSampClkTiming. This is the corresponding code (comments are mine):

    Create an AI voltage channel that will run continuously at 10 kHz

    DAQmxErrChk (DAQmxCreateTask("",&taskHandleRead));

    DAQmxErrChk (DAQmxCreateAIVoltageChan(taskHandleRead,"Dev7/ai1","",DAQmx_Val_Cfg_Default,-10.0,10.0,DAQmx_Val_Volts,NULL));

    DAQmxErrChk (DAQmxCfgSampClkTiming(taskHandleRead,"",10000.0,DAQmx_Val_Rising,DAQmx_Val_ContSamps,1000));

    Create an AO voltage channel, and then bind it to the AI sample clock
    DAQmxErrChk (DAQmxCreateTask("",&taskHandleWrite));
    DAQmxErrChk (DAQmxCreateAOVoltageChan(taskHandleWrite,"Dev7/ao0","",-10.0,10.0,DAQmx_Val_Volts,NULL));

    DAQmxErrChk (DAQmxCfgSampClkTiming(taskHandleWrite,"ai/SampleClock",1000.0,DAQmx_Val_Rising,DAQmx_Val_ContSamps,1000));

    ... So what I'm trying to understand here is how to interpret the sampling rate argument (1000.0) in the second call to DAQmxCfgSampClkTiming, where the AO channel is bound to "ai/SampleClock". It seems to me that this argument should be meaningless, other than perhaps to determine the buffer size, since by definition this AO channel will clock out a sample every time ai/SampleClock rises. So maybe someone can help me understand how this argument is used...

    But in any case, the docs say "If you use an external source for the sample clock, set this value to the maximum expected rate of the clock." In this case, the clock is set up a few lines earlier at 10 kHz, so isn't it "wrong" that in the second call to DAQmxCfgSampClkTiming a sampling rate of 1 kHz is specified (much less than the maximum expected sample rate)? What is the consequence of this?

    Thank you!

    -Dan

    Hey Dan, those are some big questions you've got.

    You pretty much hit the nail on the head with your guesses. The buffer size is based on the acquisition mode in combination with the sampling rate you specify. Think of it as an implicit declaration of the buffer size (but there is certainly an explicit way to define it, if you wish).

    As for your second question, it again comes back to the buffer size, except that this time it's for an external clock source. Since the hardware has no implicit way to know the clock frequency of the external source, it asks you to specify the maximum frequency explicitly so it can create a buffer of the right size.
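
    For reference, a sketch of the explicit route mentioned above, reusing the example's handles: pass the true maximum clock rate (10 kHz) and then size the buffer yourself (the 20000-sample size is illustrative):

    /* Size the AO buffer explicitly instead of letting the rate argument
       imply it; for AI tasks the counterpart is DAQmxCfgInputBuffer. */
    DAQmxErrChk (DAQmxCfgSampClkTiming(taskHandleWrite,"ai/SampleClock",10000.0,DAQmx_Val_Rising,DAQmx_Val_ContSamps,1000));
    DAQmxErrChk (DAQmxCfgOutputBuffer(taskHandleWrite,20000));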
