Precision of timestamp

When LabVIEW runs on a PC (and not on a real-time system such as a cRIO), what is the accuracy of the LabVIEW timestamp function?  I want to make sure I understand this when interpreting the data for a project, because the time is reported by the timestamp with 6 digits of fractional-second precision (i.e. an output of 2.059206 seconds).  I am trying to acquire data at a rate of 300 Hz, so it would be good to know whether the timestamp data are accurate to at least 3 decimal places of a second (2.059 seconds).

Hi WyoEng,

If you store the values that the timestamp function (Get Date/Time In Seconds) generates in an array (ignoring redundant values when sampling too quickly) and then check the difference between consecutive entries, you will see that the resolution is 1 ms.

Try saving the timestamp faster than 1000 Hz and you will see that you get repeated values.

What hardware are you using for the acquisition?

Is this hardware timing or software timing?

I think the precision of the numbers you see is due to the way LabVIEW stores the Timestamp data type.

You can see this in the LabVIEW Help (see the following link, in the Time Stamp section):

How LabVIEW stores data in memory

http://zone.NI.com/reference/en-XX/help/371361J-01/lvconcepts/how_labview_stores_data_in_memory/

"

Timestamp

LabVIEW stores a timestamp as a cluster of four integers, where the first two signed integers (64 bits) represent the timezone-independent number of whole seconds that have elapsed since 12:00 a.m., Friday, January 1, 1904, Universal Time [01/01/1904 00:00:00]. The second two unsigned integers (64 bits) represent the fractions of seconds.

"

Tags: NI Software

Similar Questions

  • Use LabVIEW Timestamp in C++

    Hello

    I have to synchronize two programs to within 20 ms. The first uses the native LabVIEW timestamp (128 bits, 1904 epoch, etc.) and cannot be changed.

    The second is written in C++ using DAQmx; I found the trick of subtracting the number of seconds from a classic struct tm.

    But it's not accurate enough for me.

    The only solution I found is to use the SYSTEMTIME structure and apply the same rounding as with the struct tm.

    But I don't find that very elegant, so is it possible to use the same routine as LabVIEW in a classic C++ (or classic CVI) program?

    Thanks in advance!

    Eric

    Hey Eric-

    I don't know if you are still working on this, but I thought I would mention the absolute time API in the CVI Utility Library.  It uses the National Instruments binary time format, which I believe is what LabVIEW uses, and it should meet your needs.

    NickB

    National Instruments
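
    For what it is worth, the epoch arithmetic itself is simple. Below is a minimal C++ sketch of it, not the CVI Utility Library API; it assumes the system clock is an acceptable time source and uses only the well-known offset of 2,082,844,800 seconds between the LabVIEW epoch (1904-01-01 UTC) and the Unix epoch (1970-01-01 UTC):

    #include <chrono>
    #include <cstdint>
    #include <cstdio>

    // Seconds from 1904-01-01 00:00:00 UTC (LabVIEW) to 1970-01-01 00:00:00 UTC (Unix).
    constexpr int64_t kUnixToLabview = 2082844800LL;

    int main() {
        using namespace std::chrono;
        // Wall-clock time since the Unix epoch, at microsecond resolution.
        auto now = duration_cast<microseconds>(system_clock::now().time_since_epoch());
        int64_t lvSeconds = now.count() / 1000000 + kUnixToLabview;
        // Scale the leftover microseconds into LabVIEW's 2^-64 s fraction units.
        uint64_t lvFraction = static_cast<uint64_t>(
            (static_cast<double>(now.count() % 1000000) / 1e6) * 18446744073709551616.0);
        std::printf("LabVIEW time: %lld + %llu * 2^-64 s\n",
                    static_cast<long long>(lvSeconds),
                    static_cast<unsigned long long>(lvFraction));
        return 0;
    }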

  • Precise timestamping of each 6608 counter with GPS, IRIG-B

    We have a 6608 timer/counter card that forms the basis of an astronomical photometer. We have TTL signals entering five channels, plus an IRIG-B GPS clock. After a lot of help from NI, we now have loops with precisely timed durations (our 'integration time'). The problem is GPS-timestamping them. Although we get a GPS timestamp for each integration period, the actual times of these events (which, if the integration time is 1 s, should increment one second at a time) wander around by +/-0.75 s. We just can't figure out how to get the timestamp to reflect reality. The loops bang out the (correct) counts each second, but the clock cannot keep up (it sometimes reports several integrations in a row with the same time). We need the correct timestamp associated with the integration period beginning at a specific time (we understand the hardware limitations at the ms level).

    Cheers,

    Tom Harrison

    Hi, Tom Harrison.

    There is a knowledge base article that presents a solution to this specific issue: PXI-6608 not recognizing IRIG-B GPS time signals

    Things to check:

    (1) Make sure your IRIG-B signal is of the "DC Level" type.

    (2) Make sure your IRIG-B signal is compatible with the PXI-6608.

    (3) Make sure the synchronization signal is routed to the real-time clock synchronization line.

    For more information about GPS synchronization with the 6608, read through this knowledge base article.

    I hope that you are having a great day!

  • Display precision for timestamps in an interactive report

    Hi all

    I have converted the 'old' DATE data type to "TIMESTAMP(1) WITH LOCAL TIME ZONE" in the database tables.

    When I view these fields in an interactive report, they appear with a precision of 6 digits (the Oracle default), but when I use the SQL Commands tool in APEX they display with a precision of only 1 digit, per the database column definition.
    APEX still sees these columns as a DATE type.

    Does anyone know how to display these columns as they are stored in the database (i.e. with 1-digit precision)?
    And how can I change the session time zone inside APEX to check that the new columns work correctly?

    Thank you

    Published by: whYMe on February 3, 2010 11:38

    Hello

    - Edit the page
    - Interactive report (report attributes)
    - Edit the particular column
    - Number/Date format

    It looks like you want: DD-MON-YY HH.MI.SSXFF1 AM

    Mike

  • ORA-1866 when trying to change the precision of a timestamp column

    Hello

    I use Oracle XE. I have a table that contains a TIMESTAMP column. The fractional precision is 6 and the time zone setting is LOCAL TIME ZONE.
    If I try to change the precision of the timestamp column (to 2, for example), I get the following error:

    Error ORA-01866: the datetime class is invalid.
    The following SQL statement failed:
    ALTER TABLE CLIENTS
    MODIFY ("TIME_STAMP" TIMESTAMP(2) WITH LOCAL TIME ZONE)

    Please advise.
    Thank you
    MR. R.

    Is the column empty? It should work if you are only reducing the precision.
    Try running the command in the SQL*Plus tool if you have not already done so.

    The following should do the job

    ALTER TABLE CLIENTS ADD TEMPCOL TIMESTAMP(2);
    UPDATE CLIENTS SET TEMPCOL = "TIME_STAMP";
    ALTER TABLE CLIENTS DROP COLUMN "TIME_STAMP";
    ALTER TABLE CLIENTS RENAME COLUMN TEMPCOL TO "TIME_STAMP";

    Regards

    Published by: thriller on April 7, 2009 17:22

  • Precise timestamps on a scrolling strip chart

    I have a user interface with a set of synchronized charts operating in scrolling strip-chart mode.  The x-axis scrollbar is visible on one of the charts, and the operator has the ability to pause updates to that chart (essentially locking its input) and scroll back through a bit of history.  The width of the chart is about one minute of data, and the number of points has been set to allow about an hour of scrollable history.  The other charts, synchronized with the main chart, have x-axis property nodes attached so that they follow the scrolling of the main chart.

    Everything works fine with the current configuration, except for one small detail: the timestamps.  I want date and time stamps visible on the x-axis of the main chart, so that operators know exactly when any aberration in the data actually took place.  I have seen a few posts on how to add real timestamps, but none of them seemed to work properly.  They work fine on a standard chart, but fail miserably once scrolling is enabled, so I must be doing something wrong.

    Any suggestions on the best way to get this timestamping accomplished?  It's absolutely crazy to me how much pain it is to put a real-world timestamp on a chart...  As someone who constantly defends LabVIEW against colleagues who claim it is "too difficult" to use, it's kind of embarrassing when such a simple thing becomes so complicated in LabVIEW!

    The short answer is that this is impossible with a chart once you add the requirement of being able to pause. The reason is simple: a chart stores the data on its own, but it does not save the X values. You give it only Y values, and for the X values it simply uses the index of each value; the most you can do is set a t0 and delta-t for the X scale. This works normally, but breaks when you stop feeding data to the chart, because the X value is not stored. I heard someone once say that a waveform chart allows this, but I've never looked into it and I'm not sure that's true.

    What you can do is use a graph instead of a chart: in a graph, you provide both X and Y values for each point, so you can have absolute times as the x-axis values. The key point is that, for a graph, you must provide all the data to draw, so you must maintain a circular buffer of the data yourself. You can do this by using a lossy queue and previewing the queue to get the data; there are also some examples online, as well as in the Example Finder, if you search for 'XY Chart'.
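
    The circular-buffer idea is language-independent. Here is a minimal C++ sketch of the pattern (hypothetical class name, not the LabVIEW example code): keep a fixed-capacity history of (t, y) pairs and hand the whole buffer to the XY graph on each redraw:

    #include <cstddef>
    #include <deque>
    #include <utility>

    // Fixed-capacity history of (timestamp, value) pairs for an XY graph.
    // The oldest point falls off the front once capacity is reached.
    class XYHistory {
    public:
        explicit XYHistory(std::size_t capacity) : capacity_(capacity) {}

        void append(double t, double y) {
            if (buffer_.size() == capacity_)
                buffer_.pop_front();       // discard the oldest point
            buffer_.emplace_back(t, y);
        }

        // Snapshot of the full history, e.g. to redraw the graph.
        const std::deque<std::pair<double, double>>& data() const { return buffer_; }

    private:
        std::size_t capacity_;
        std::deque<std::pair<double, double>> buffer_;
    };

    int main() {
        XYHistory history(3600);   // e.g. one hour of 1 Hz points
        history.append(0.0, 1.0);
        history.append(1.0, 1.2);
        return history.data().size() == 2 ? 0 : 1;
    }

    Pausing the display then just means skipping the redraw: because each point carries its own X value, the absolute timestamps survive scrolling, which is exactly what a chart's implicit X indexing cannot provide.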

  • Time-correlated timestamping

    Hello Forum,

    My previous post went unanswered, so I stumbled through on my own. My goal is to perform time-correlated photon-counting spectroscopy. I use a 6321 X Series DAQ card, a Perkin Elmer SPCM and a pulsed laser. I excite a sample with the laser and wish to obtain a precise timestamp of the first photon detected after the laser pulse. I then repeat the measurement to build a histogram from which I can get the lifetime of my sample (~1 microsecond).

    As I understand it, the counters on X Series cards cannot be retriggered. My solution is to reset the counter with the falling edge of the signal modulating the laser, then timestamp the first photon after it (VI attached). Right now my program gives me the timestamps of all photons in the buffer, but I want only the first. Is there a simple and elegant way to get just the first element in the buffer on each sweep without slowing down the data acquisition loop?

    Thanks in advance,

    R

    A few changes to the VI...

    > Wouldn't it be enough for me to just remove the second stop event structure in the consumer loop?

    It is preferable to keep the producer in one loop; the consumer is also the control loop, i.e. the one that handles the user's controls.
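
    As an aside, the "first element only" selection is cheap once the buffered timestamps are in memory. A minimal C++ sketch of the idea (hypothetical data layout, not DAQmx code):

    #include <cstdio>
    #include <optional>
    #include <vector>

    // All photon timestamps read from the counter buffer for one laser period;
    // keep only the first, discard the rest without re-reading the buffer.
    std::optional<double> firstPhoton(const std::vector<double>& buffered) {
        if (buffered.empty())
            return std::nullopt;        // no photon detected this period
        return buffered.front();        // timestamps arrive in order
    }

    int main() {
        std::vector<double> period = {1.25e-7, 3.1e-7, 9.9e-7};
        if (auto t = firstPhoton(period))
            std::printf("first photon at %.3e s\n", *t);
        return 0;
    }

    Accumulating (first photon time minus trigger time) into a histogram over many periods then yields the decay curve.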

  • Incorrect timestamps written to file

    Hi all

    I wanted to familiarize myself with writing data and associated timestamps to a file, to verify the rate at which samples are acquired in my system. As a little test I wrote a simple VI that loops 5 times and creates 5 sine sample points. Each point has its timestamp captured and converted to seconds and fractions of a second. After the for loop finishes, it writes the sample data (line by line) for all 5 samples with their associated timestamps.

    I imposed a 1 ms delay on each iteration and hoped to see consistency between the timestamps of consecutive samples, but sometimes they are very close or even identical to the previous timestamp, which doesn't make any sense to me. I tried longer wait times and the spacing between samples seems more accurate, but this result is intriguing.

    Example:

    31.209159     0
    31.209159    84
    31.209159    91
    31.224784    14
    31.224784   -76

    I chose not to use the high-level file-writing VIs because I had the same problems with them and thought writing a custom VI might give better results.

    Hoping someone can clarify this or show me where I'm wrong. I have attached the VI below.

    Thank you.

    If you use a hardware-timed device, such as NI DAQ hardware, you get accurate timestamps. Everything else is a limitation of the Windows operating system: software timestamps advance only at the OS timer tick (about 15.6 ms by default), which is why consecutive samples can share an identical timestamp, as in your data above. You can always switch to LabVIEW RT if you need more deterministic timing.
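
    The quantization is easy to observe outside LabVIEW as well. Here is a minimal C++ sketch (illustrative only) that requests 1 ms pacing and prints the deltas between consecutive wall-clock reads; on a classic Windows configuration the clock advances in ~15.6 ms steps, matching the repeated values in the data above, while runtimes with a precise system time show deltas near the requested 1 ms:

    #include <chrono>
    #include <cstdio>
    #include <thread>

    int main() {
        using namespace std::chrono;
        auto read = [] {  // wall-clock seconds, like Get Date/Time In Seconds
            return duration<double>(system_clock::now().time_since_epoch()).count();
        };
        double prev = read();
        for (int i = 0; i < 10; ++i) {
            std::this_thread::sleep_for(milliseconds(1));  // request 1 ms pacing
            double now = read();
            std::printf("delta = %.6f s\n", now - prev);   // reveals clock resolution
            prev = now;
        }
        return 0;
    }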

  • Timestamp to number function

    The Timing and Conversion function palettes have a function that converts a number of seconds to a time stamp (the To Time Stamp function).  I need to go the other way: from a timestamp to a number of seconds (seconds since midnight, Friday, January 1, 1904, Universal Time).  Does anyone have a function to do this?  I tried a type cast without success.

    Simply use the To Double Precision Float function. It is polymorphic and will accept a timestamp, returning the number of seconds since the LabVIEW 'zero time'.

  • How can I access the ms value of a timestamp in an LVM file?

    Hi all,

    I have a measurement file, and when I parse it I need the timestamp with millisecond precision, but I can't read it in LabVIEW!
    Sample file:
      
    Channels 4
    Samples 5000 5000 5000 5000
    Date 23/04/2010 23/04/2010 23/04/2010 23/04/2010
    Time 12:40:59.828125 12:40:59.828125 12:40:59.828125 12:40:59.828125
    X_Dimension time time time time
    X0 3.3548640598213959E+9 3.3548640598213959E+9 3.3548640598213959E+9 3.3548640598213959E+9
    Delta_X 2.000000E-7 2.000000E-7 2.000000E-7 2.000000E-7
    ***End_of_Header***

    in LabVIEW I can only read up to the seconds value! Why?
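
    No answer was recorded in this thread, but one workable approach is to split the Time field on the decimal point and read the fraction yourself. A minimal C++ sketch (hypothetical helper, assuming the HH:MM:SS.ffffff format shown in the header above):

    #include <cstdio>
    #include <string>

    // Extract milliseconds from an LVM Time field such as "12:40:59.828125".
    // Assumes at least three fractional digits, as in the header above.
    int millisecondsOf(const std::string& lvmTime) {
        std::size_t dot = lvmTime.find('.');
        if (dot == std::string::npos)
            return 0;                                  // no fractional part present
        return std::stoi(lvmTime.substr(dot + 1, 3));  // "828125" -> 828
    }

    int main() {
        std::printf("%d ms\n", millisecondsOf("12:40:59.828125"));  // prints 828 ms
        return 0;
    }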


  • Unable to replicate the fractional seconds when reading a timestamp from a binary file

    I use LabVIEW to collect packets of data structured in the following way:

    cluster containing:

    DT - single-precision floating-point number

    timestamp - initial time

    2D array of single - data

    Once all the data are collected, the array of clusters (above) is flattened to a string and written to a file.

    I am trying to read the binary data into another program under development in C#. I have no problem reading everything except the timestamp. It is my understanding that LabVIEW stores timestamps as two 8-byte integers. The first is the whole number of seconds since January 1, 1904, 12:00 a.m., and the second represents the fractional part of the seconds elapsed. However, when I read this information in and convert the binary to decimal, the whole number of seconds is correct, but the fractional part is not the same as that displayed by LabVIEW.

    Example:

    Hex stored in the binary file representing the timestamp: 00000000CC48115A23CDE80000000000

    Time displayed in LabVIEW: 8:51:38.139 08/08/2012

    Timestamp converted to an extended-precision floating-point number in LabVIEW: 3427275098.139861

    Binary timestamp converted to a decimal floating-point number in C#: 3427275098.2579973248250806272

    Binary timestamp converted to a DateTime in C#: 8:51:38.257 08/08/2012

    Does anyone know why there is a difference? What causes it?

    http://www.NI.com/white-paper/7900/en

    The least significant 64 bits should be interpreted as a 64-bit unsigned integer. It represents the number of 2^-64 second units after the whole seconds specified in the most significant 64 bits. Each tick of this integer represents 0.05421010862427522170... attoseconds.
    

    If you multiply the fractional part of your value (2579973248250806272) by 2^-64, I think you will have the correct timestamp in C#.
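
    To put the correction into code, here is a minimal C++ sketch (illustrative, not the poster's C#) that decodes the 16 bytes shown above; the fix is scaling the fractional integer by 2^-64 instead of reading its digits as a decimal fraction:

    #include <cstdint>
    #include <cstdio>

    int main() {
        // The 16 timestamp bytes from the file, big-endian:
        // seconds = 0x00000000CC48115A, fraction = 0x23CDE80000000000.
        int64_t  seconds  = 0x00000000CC48115ALL;
        uint64_t fraction = 0x23CDE80000000000ULL;

        // The fractional field counts units of 2^-64 seconds.
        double frac = static_cast<double>(fraction) / 18446744073709551616.0;  // 2^64
        std::printf("%lld + %.6f s\n", static_cast<long long>(seconds), frac);
        // Prints 3427275098 + 0.139861 s, i.e. 8:51:38.139, matching LabVIEW.
        return 0;
    }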

  • Scanning a timestamp from a string

    Hi all

    A minor question about timestamps and string parsing. I needed to convert a string to a timestamp and found the solution I needed in this post: link

    I wanted to make it a bit more streamlined and noticed that when I assembled the format string programmatically, the output would not be recognized as a timestamp: it comes out as a double. The string inputs are identical. The double-precision output converted to a Timestamp gives the correct value, so I have a workaround.

    I was just curious whether this is a minor issue with LabVIEW or whether it is intentional. Or is there some step I'm missing to force my output to a Timestamp?

    Thank you

    Dave

    When you wire a string constant to Scan From String's format string input, the function can adapt to the input and you automatically get the appropriate output type.   With a programmatically generated format string, the output follows the 'default value' input (the circle with the dot), which defaults to DBL.  Just wire a Timestamp constant, or the current time, to that input.

  • Resolution of timestamp in DSC

    I know that Citadel has a timestamp resolution of 100 nanoseconds... but I have also heard that LV DSC has a resolution of only 1 millisecond.

    I know I could figure it out with a little test, but I was wondering if the following is true:

    1. If you open a Citadel data session using LV DSC, the default resolution for all timestamps is 1 ms, meaning LabVIEW DSC-generated timestamps have 1 ms accuracy.

    2. However, if you build your own VI-based server device, you can specify your own timestamps. Can these timestamps have precision up to 100 ns?

    You are right. The time resolution for ODBC is 10 ms, for the DSC tag engine it is 1 ms, and for Citadel it is 100 ns. Of course you will be limited by the lowest resolution in the chain: when retrieving data through ODBC you will only get 10 ms resolution, even if the data was saved with 1 ms resolution.
    If you create a VI server it will still go through the tag engine and is therefore limited to 1 ms resolution.

  • Parsing a string to get a Timestamp, LV 2012 problem

    I have a VI I wrote in LV 2013, where it works fine.  But with the same VI in LV 2012 SP1, parsing a string to get a Timestamp does not work.  Attached is the VI saved in LV 2012 SP1.

    Background:

    I work with an FPGA where I am timing pulses using the 40 MHz clock.  I log these pulses to a file on my PXI host. To correlate the FPGA clock with a log file and real life someday, I named the file with a timestamp and the 64-bit counter value encoded as a hexadecimal string.  Now I have a pretty good correlation between the 64-bit count and the time of day, which makes searching the data file easier.

    Now I am trying to parse the file name, so that in my analysis code I can re-associate the 64-bit value with the time of day, using the string functions to split the file name apart.  With a few regexes (which I had never used in real life before), I broke down the string and got the base time as a timestamp, and the counter value as a 64-bit integer.  The VI works fine in LV 2013.

    But when I saved it back to LV 2012, parsing the string to get the timestamp fails.  No error, but the timestamp shows 19:00 12/31/1599 (I am GMT-5).  I have no idea why.  A bug in string parsing that was fixed in LV 2013?

    Please run the attached VI in LV 2012 and confirm whether it fails for you.  Then open it in LV 2013 and see if it works.

    I am trying to make it work in LV 2012 because this project is locked to that version of LV for now.  I can probably do something to reorganize the time string into something that will scan into a timestamp if I have to.

    Hi Bill,

    Sorry to be a bit late on this; I just came across this thread while hunting down the CAR for a separate issue. The fix for CAR 300375 is what causes the difference in behavior between LabVIEW versions. It appears on the LabVIEW 2013 known issues list, although the description focuses more on the original bug than on other possible differences between versions.

    The speculation earlier in this thread was close to what happened in the fix. LabVIEW 2013 now expects a 2-digit or 4-digit year string when you use the %y or %Y tag, respectively. Previously, the behavior was more forgiving of the year format, which led to the incorrect behavior you observed with 4-digit year strings and the %y tag. The original problem you pointed out (LV 2012 incorrectly handling year strings tagged with the 2-digit %y code) is precisely what was fixed in the CAR.

    Regarding your strange result with 2-digit years and the %Y tag: this is a limitation of the Time Stamp data type itself. The year will always be coerced to between 1600 and 3000. So no extreme time-travel logging. Yet.
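
    The same 2-digit versus 4-digit distinction exists in the C-style format codes that LabVIEW's %y and %Y mirror; a minimal C++ sketch (illustrative only) using std::get_time:

    #include <ctime>
    #include <iomanip>
    #include <iostream>
    #include <sstream>

    int main() {
        // %Y expects a 4-digit year...
        std::tm a{};
        std::istringstream s1("2012-08-08");
        s1 >> std::get_time(&a, "%Y-%m-%d");
        std::cout << (s1.fail() ? "fail" : "ok") << " year=" << a.tm_year + 1900 << '\n';

        // ...while %y expects a 2-digit year (here interpreted as 2012).
        std::tm b{};
        std::istringstream s2("12-08-08");
        s2 >> std::get_time(&b, "%y-%m-%d");
        std::cout << (s2.fail() ? "fail" : "ok") << " year=" << b.tm_year + 1900 << '\n';
        return 0;
    }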

    Cheers,

  • TIMESTAMP and TIMESTAMP WITH TIMEZONE

    Pretty simple scenario. A table with 2 columns, no constraints. Column TS is of the TIMESTAMP data type. Column TS_TZ is of the TIMESTAMP WITH TIME ZONE data type. Precision for both is 4. I'm studying explicit data type conversions. To improve my understanding, I'm experimenting with inserting string literals:

    INSERT INTO TEST_DATA_TYPES (ts, ts_tz)

    VALUES ('19-SEP-14 02:24:14', '19-SEP-14 02:24:14');

    The attempt above raises ORA-01840: input value not long enough for date format.

    INSERT INTO TEST_DATA_TYPES (ts, ts_tz)

    VALUES ('19-SEP-14 02:24:14', '19-SEP-14 02:24:14 AM');

    INSERT INTO TEST_DATA_TYPES (ts, ts_tz)

    VALUES ('19-SEP-14 02:24:14', '19-SEP-14 02:24:14.0');

    Both inserts above work. Note that all I changed was the ts_tz string. In one, I added 'AM', and in the other I added a fraction of a second. Did the first attempt fail because Oracle needs some sort of explicit indication of where the precision ends before the time zone region name can be added?

    Hello

    2679789 wrote:

    ... To improve my understanding, I'm experimenting with inserting string literals:

    INSERT INTO TEST_DATA_TYPES (ts, ts_tz)

    VALUES ('19-SEP-14 02:24:14', '19-SEP-14 02:24:14');

    ...

    It is very important to understand that you should not use a string where another data type (for example, a TIMESTAMP) is expected; that is simply asking for trouble.  Even if you make it work properly today, something you do next week, such as applying a patch, may break it.  Use explicit conversion functions, such as TO_TIMESTAMP, when you must.  Never rely on implicit conversions.

    Learning exactly why a given implicit conversion fails may not be a good way to invest your time.
