B & K Portable Signal Analyzer binary data files

I am currently using a B&K 2144/7651 portable signal analyzer.  I would like to start a project to display, in LabVIEW, the binary data files created by the analyzer.  I have technical documentation that specifies the format of the binary files, and it includes example code written in Pascal, but I do not know how to translate this to LabVIEW.  If anyone has experience with these data files, or could give me a hint on how to interpret the files in LabVIEW, I'd appreciate it.

I can post the technical documentation if I have to, but I would need to scan it first.

Thank you

Eric

Hi Arvin,

I just wanted to share the work done to date. It is not quite finished, but I have to get some sleep.

I will probably finish it tomorrow/today (later).

See you soon!
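While waiting for the scanned documentation, the general technique is the same in any language: map each field of the Pascal record onto a fixed-size binary read at a known offset. Here is a sketch in Python purely to show the idea; the header layout below is invented for illustration and is NOT the actual B&K 2144/7651 format, which must come from the documentation.

```python
import struct

# Hypothetical header layout: NOT the actual B&K 2144/7651 format.
# A Pascal record such as
#     record
#       ident   : array[1..4] of char;
#       version : word;      { 16-bit unsigned }
#       nlines  : longint;   { 32-bit signed   }
#       ref_db  : single;    { 32-bit float    }
#     end;
# maps to the struct format below ('<' = little-endian, no padding).
HEADER_FMT = "<4sHlf"
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 14 bytes

def read_header(data: bytes) -> dict:
    ident, version, nlines, ref_db = struct.unpack_from(HEADER_FMT, data, 0)
    return {"ident": ident.decode("ascii"), "version": version,
            "nlines": nlines, "ref_db": ref_db}

# demo: build a fake header and parse it back
blob = struct.pack(HEADER_FMT, b"BK21", 2, 800, 94.0)
print(read_header(blob))
```

In LabVIEW the equivalent would be a sequence of Read from Binary File calls with the corresponding datatypes (U16, I32, SGL) and the byte order set to match what the Pascal source assumes.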

Tags: NI Software

Similar Questions

  • How to install a large data file?

    Does anyone know how I can install large binary data files to a BlackBerry during the installation of an application?

    My application needs a data file of about 8 MB.

    I tried to add the file in my BlackBerry project in the Eclipse environment.

    But the compiler could not generate an executable file, giving the following message:

    "Unrecoverable internal error: java.lang.NullPointerException." CAP run for project xxxx

    So I tested with a small binary file. This time the compiler generated a .cod file, but the javaloader failed to load the application, with this message:

    "Error: file is not a valid Java code file."

    When I tried with a slightly larger one, it loaded, but the program failed to run, with this message:

    "Error starting xxx: Module 'xxx' has verification error 3135 (codfile version 78) at offset 42b5."

    Is it possible to include large binary data files in the cod file?

    And what is the best practice to deal with such large data files?

    I hope to get a useful answer to my question.

    Thanks in advance for your answer.

    Kim.

    I finally managed to include the large data file using library projects.

    I divided the data file into 2 separate files and then added each file to its own library project.

    Each library project holds about 4 MB of the data file.

    So I have to install 3 .cod files.

    But in any case, it works fine. And I think there will not be any problem, because the library projects are only used the first time.

    Peter, thank you very much for your support...

    Kim

  • Unable to shrink tables (data files)

    Hi all

    Oracle 10.2.0.3.0 on HP-UX

    I've restored a cold backup of size 1.5 TB. All tables are present in a single schema.

    SQL > SELECT SUM(bytes)/1024/1024/1024 "GB" FROM user_segments;

    GB
    ----------
    1496.96539

    The application user has purged all of the useless rows from the tables.

    SQL > SELECT SUM(bytes)/1024/1024/1024 "GB" FROM user_segments;

    GB
    --------
    677.87667

    After purging the tables, I tried to shrink them to lower the HWM. For this, I ran the statements below on all of the tables.

    ALTER TABLE table_name ENABLE ROW MOVEMENT;

    ALTER TABLE table_name SHRINK SPACE COMPACT;

    I checked with a script available on Metalink that analyzes the data files to see how much each data file can be shrunk manually.

    After running the script, I found that no shrink of the data files can be done. When I try to resize a data file using

    SQL > ALTER DATABASE DATAFILE '<datafile>' RESIZE <size>;

    it throws this error:

    ORA-03297: file contains used data beyond requested RESIZE value

    When I check the size of my database, it is the same as what I measured from the tables, that is to say 1.5 TB.

    Could you please suggest what would be the problem?

    Thank you
    KSG

    Published by: KSG on 15 March 2010 14:01

    Hello

    But I have no space left to create a tablespace or to move the tables. Exporting more than 1.5 TB of data is not an option for me.

    So it will be difficult, but it is not impossible.

    First, as posted previously, you must use the SHRINK SPACE option, as you can see in the link below,
    in order to free up space in your tablespace:

    http://www.Oracle.com/technology/oramag/Oracle/05-may/o35tuning.html

    With the query that I posted, you can locate the last block in your data file.

    You can find the segment that owns this block by changing the query as follows (assuming that your block size is 8192 bytes; otherwise you will need to change
    this value in the query):

    select a.tablespace_name, a.file_name, b.owner, b.segment_name, ceil( (nvl(hwm,1)*8192)/1024/1024 ) "Mo"
    from dba_data_files a, ( select file_id,
                             max(block_id+blocks-1) hwm
                             from dba_extents
                             group by file_id
                           ) b
    where a.file_id = b.file_id(+)
    order by tablespace_name;
    

    Then you MOVE (or REBUILD, if it is an index) the corresponding segment into a different tablespace,
    and then move it back to the original one.

    Check again where your last block is, using the above query.

    And, segment by segment, you can move down the HWM of each data file and reduce its size without
    using too much space.

    But you need time, so be patient. Time or space, you can choose :-).

    Hope this helps;
    Best regards
    Jean Valentine

    Published by: Lubiez John Valentine on March 14, 2010 10:44

  • How to analyze cDAQ data in Signal Express, especially after acquisition?

    In the first series of tests of my instrument, the run took longer than expected, so the data was recorded over 6 days.  The file is too large to export to Excel.  At the beginning of the project, I was too ignorant to know I could go ahead and add analysis and scaling of the measurements.  By scaling, I mean converting my recorded currents into dew points or whatever it is that I am recording.

    How do I scale the data so that the output reads as expected, e.g. 4 mA = dew point of -20 C, or 0 PSIG?  Can I pre-program this so it is recognized for each event?

    As for the actual analysis I am doing: I would first analyze the data I recorded and choose different points to send to Excel to graph and analyze.  Is this possible?

    Secondly, I would like to know how to scale and analyze my data in the project, so that I do not have to do this analysis afterwards in the future.

    I have a cDAQ-9172 with LabVIEW Signal Express 3.0 that uses four modules: two 9211 thermocouple modules, one 9201 ±10 V module, and one 9203 module for my 4-20 mA signals.

    Thank you for any assistance.

    Hi Patricia,

    "' You can do this by adding a step Load/Save signals ' analog '.  I hope this helps!

  • How to add binary data to a file existing in OSB

    Hello

    I have an OSB project in which I need to append binary data via ftp.  Here's my current flow:

    Binary out of MFL -> replace $body with the binary MFL output -> Publish action (a business service that is configured for binary ftp data).

    However, when the data are sent, the following error is thrown:

    URI = ftp://xxx:21 / opt/home/zzz/logs

    Application of metadata =.

    <xml-fragment>

    <tran:headers xsi:type="ftp:FtpRequestHeaders" xmlns:ftp="http://www.BEA.com/WLI/SB/transports/FTP" xmlns:tran="http://www.BEA.com/WLI/SB/transports" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    <ftp:fileName>11802_insert_oh_xfrmr.eai_data</ftp:fileName>

    </tran:headers>

    <tran:encoding xmlns:tran="http://www.BEA.com/WLI/SB/transports">utf-8</tran:encoding>

    <ftp:isFilePath xmlns:ftp="http://www.BEA.com/WLI/SB/transports/FTP">false</ftp:isFilePath>

    </xml-fragment>

    Payload =

    19266787 ^ CLLL ^ C711791 ^ CLLL ^ C ^ C1178213 ^ Phase fixed (1) ^ C63185066204 ^ CA ^ CConstructed ^ C358880 NW 4DR LK MONTAZA ^ C120/240 ^ C09361 ^ CN/A ^ Remove CProposed ^ CAerial ^ C45718100 ^ C

    Unknown ^ C ^ ONC ^ temperature closed ^ CClamp ^ CA ^ C1 ^ C2-cover ^ C19266796 ^ ONC ^ ONC ^ C15 ^ C ^ CYes ^ CYes ^ CYes ^ C13200Y/7620 X 22860Y/13200 ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C

    ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C ^ C19266641 ^ Coh_fuse_switch ^ C63086930001 ^ C4 ^ CN31 ^ CDD0613 ^ C22.9 ^ C63376474601 ^ C8129580 ^ wrong ^ C4 ^ Coke

    echobee ^ C43 ^ C0 ^ Cdefault ^ CYes ^ C

    >

    # < 6 November 2013 2:28:23 pm > < error > < WliSbTransports > < goxsd1604 > < osb_server1 > < ExecuteThread [ASSET]: '3' for queue: '(self-adjusting) weblogic.kernel.Default' > < <

    Anonymous > > < BEA1 4B688443B66FA09FFE75 > < d2b4601b2fffd9b7:6b9f2297:1422a857ee8: - 8000 - 000000000000171 b > < 1383766103889 > < BEA-381105 > < error occurred for the endpo of service

    int: com.bea.wli.sb.transports.TransportException: cannot open the data connection. Message is received error (553) of FTP server [mpsd1] [10.111.19.32] IP response in.

    status of RT [21] [connected] command executing [storopt/Accueil/zzz/logs/11802_insert_oh_xfrmr.eai_data.a]

    com.bea.wli.sb.transports.TransportException: failed to open the data connection. Message is received error response (553) of FTP server [mpsd1] [10.111.19.32] IP [2 port

    1] [Hardcover] status command executing [stor opt/home/icanadm/logs/11802_insert_oh_xfrmr.eai_data.a]

    at com.bea.wli.sb.transports.ftp.connector.FTPTransportProvider.sendMessage(FTPTransportProvider.java:422)

    As I understand it, error code 553 represents a wrong file name.  The file name I am providing is 11802_insert_oh_xfrmr.eai_data, but it seems that the name is changed to 11802_insert_oh_xfrmr.eai_data.a.  So I did the obvious and changed the file name in several different ways (without the .eai_data extension, with the number removed from the file name), but still no luck.

    Any suggestions?

    Thank you

    Yusuf

    You can try using the JCA ftp transport rather than the OSB ftp transport.

    See:

    http://docs.Oracle.com/CD/E23943_01/integration.1111/e10231/adptr_file.htm#BABJEFCJ

    It lets you activate append mode.

  • Parse flat file data into a nested structure

    This has been driving me crazy all day long.

    I have a flat data file that I want to parse into a nested data structure.

    Small sample data:

    0 HEAD
    1 SOUR FTW
    2 VERS Family Tree Maker (16.0.350)
    2 NAME Family Tree Maker for Windows
    2 CORP MyFamily.com, Inc.
    3 ADDR 360 W 4800 N
    4 CONT Provo, UT 84604
    3 PHON (801) 705-7000
    0 TRLR
    

    If anyone recognizes this: yes, it's a small piece of a GEDCOM file.  That's what I'm trying to parse.  For anyone who is not familiar with this data format: the first number is the level of a data element.  Level 0 lines are the roots of a data segment.  Level 1 lines relate to the closest preceding level 0 line.  Level 2 lines relate to the closest preceding level 1 line.  And so on.

    Here is an example of the desired output, with the various elements nested under their related parents.

    <cfset foobar = {
     HEAD = {lvl=0,
     SOUR = {lvl=1,data="FTW",
     VERS = {lvl=2,data="Family Tree Maker (16.0.350)"},
     NAME = {lvl=2,data="Family Tree Maker for Windows"},
     CORP = {lvl=2,data="MyFamily.com, Inc.",
     ADDR = {lvl=3,data="360 W 4800 N",
     CONT = {lvl=4,data="Provo, UT 84604"}},
     PHON = {lvl=3,data="(801) 705-7000"}}}},
     TRLR = {lvl=0}
    }>
    
    <cfdump var="#foobar#">
    

    I think I need some kind of recursive function to nest this data correctly, but I just can't figure out how to do it.

    I have this basic function, which will place each line of data in a separate structure key:

    <cffunction name="parseFile">
         <cfargument name="file" required="yes">
         <cfargument name="line" required="no" type="string" default="">
         
         <cfscript>
              var returnStruct = structNew();
              var subStruct = structNew();
              var cur_line = "";
              var next_line = "";
              var line_lvl = "";
              var line_key = "";
              var loop = true;
              
              if (len(trim(arguments.line)) EQ 0) {
                   cur_line = fileReadLine(arguments.file);
              }
              else
              {
                   cur_line = arguments.line;
              }
              
              do {
                   if (not FileISEOF(arguments.file)) {
                        next_line = fileReadLine(arguments.file);
                   }
                   else
                   {
                        next_line = "-1";
                        loop = false;
                   }
                   
                   line_lvl = listFirst(cur_line, ' ');
                   cur_line = listRest(cur_line, ' ');
                   line_key = listFirst(cur_line, ' ');
                   cur_line = listRest(cur_line, ' ');
                   
                   returnStruct[line_key] = structNew();
                   returnStruct[line_key]["level"] = line_lvl;
    
                   cur_line = next_line;
              } while (loop);
              
              return returnStruct;
         </cfscript>
    </cffunction>
    
    <cfscript>
         gedcom_file = FileOpen(getDirectoryFromPath(getCurrentTemplatePath()) & "Ian Skinner.GED","read");
         /*gedcom_data = {individuals = structNew(),
                        families = structNew(),
                                             sources = structNew(), 
                                             notes = structNew()};*/
                                             
         gedcom_data = parseFile(gedcom_file);
    </cfscript>
    
    <cfdump var="#gedcom_data#" label="Final Output">
    

    I have tried many ways of recursively calling this function in order to nest the elements.  None of them produced the hand-coded output in the example above.  What got me closest was recursively calling the parseFile() function towards the end of the while loop, whenever the next line's level is greater than the current line's level:

    if (listFirst(next_line,' ') GT line_lvl) {
         parseFile(arguments.file,next_line);
    }
    


    It works pretty well as long as the next line's level is the same as or higher than the previous line's level.  But once the next line's level is lower, the recursive call does not return to the appropriate parent level; the current function call just keeps looping over the data file.  Everything I have tried, to produce correct output from the recursive calls when the next data line belongs to a higher-level parent line, just distorts the data horribly.

    Yes, that's exactly it. I think the node must always be added to the stack.

    I was short on time earlier, but this is what I had in mind.

    That is to say...

    while (not FileIsEOF(gedcom_file)) {

         line = fileReadLine(gedcom_file);

         // extract the node data
         node = {};
         node.lvl = listFirst(line, " ");
         line = listRest(line, " ");
         key = listFirst(line, " ");
         if (listLen(line, " ") gt 1) {
              node.data = listRest(line, " ");
         }

         // get the most recent ancestor from the stack
         lastNode = stack[1];

         // if it is a sibling/uncle, pop until we find its parent
         while (arrayLen(stack) and node.lvl lte lastNode.lvl) {
              arrayDeleteAt(stack, 1);
              lastNode = stack[1];
         }

         // add the node to the stack
         arrayPrepend(stack, node);

         // and attach this node to its parent
         lastNode[key] = node;
    }
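    The stack-based approach sketched above can be written out as a complete, runnable function. This Python version is illustrative only (the thread's own code is ColdFusion); a dummy root node at level -1 means level-0 records need no special case:

    ```python
    def parse_gedcom(lines):
        """Nest 'level tag [data]' lines by level using a stack of ancestors."""
        root = {"lvl": -1}
        stack = [root]                 # stack[-1] is the most recent ancestor
        for line in lines:
            parts = line.split(" ", 2)
            lvl, key = int(parts[0]), parts[1]
            node = {"lvl": lvl}
            if len(parts) > 2:
                node["data"] = parts[2]
            while stack[-1]["lvl"] >= lvl:   # pop siblings/uncles
                stack.pop()
            stack[-1][key] = node            # attach to parent
            stack.append(node)
        return root

    sample = [
        "0 HEAD",
        "1 SOUR FTW",
        "2 VERS Family Tree Maker (16.0.350)",
        "2 NAME Family Tree Maker for Windows",
        "2 CORP MyFamily.com, Inc.",
        "3 ADDR 360 W 4800 N",
        "4 CONT Provo, UT 84604",
        "3 PHON (801) 705-7000",
        "0 TRLR",
    ]
    tree = parse_gedcom(sample)
    ```

    Note that real GEDCOM files repeat tags at the same level (e.g. many INDI records), so a production parser would collect children in lists rather than overwrite struct keys as this sketch does.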

  • Binary data from GPS - RF recording/playback example VI with NI USRP

    Hello

    In the demo video (http://www.ni.com/white-paper/13881/en) a u-blox receiver was used to record the GPS signal while driving. How is it possible to record with u-center in a binary data format that is usable within LabVIEW for playback of the GPS signal? u-blox uses the *.ubx data format; is there a converter?

    Hello YYYs,

    The file was generated not by the u-blox but by the recording and playback VI.  An active GPS antenna, powered through some amplifiers from Mini-Circuits, was connected to the USRP, and the LabVIEW program created the file (the USRP being used as a receiver).

    Later, the USRP plays the file back (generation), and the u-blox GPS receiver is fooled into thinking that its location is currently somewhere else.

  • Split a large data file

    Hello:

    I have a large .dat file that contains multiple groups of data. I tried the Import Wizard, but it is only able to parse all the data channels (columns) at once. How do I create a DataPlugin that is capable of breaking a large file into several groups of data?

    Example Structure:

    Comments and header information

    Group info

    Group info

    Group info

    Channels

    Data...

    Group info

    Group info

    Group info

    Channels

    Data...

    and it repeats.

    My goal in having separate groups is to import each group of data as a sheet in an Excel file.

    Hello stfung,

    Please find attached a draft plugin to use. Download the "URI" file to your computer and then double click on it. This will install a plugin called "ModTest-text file".

    The major section of the plugin is the meta-data handling; reading the signal data into groups is the smaller part.

    If you are interested, that part of the script is in a function called "ProcessSignals(oFile,oGroup)".

    Please let me know if the plugin works for you.

    Andreas
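    The group-splitting logic such a plugin needs can be sketched independently of DIAdem. This Python sketch assumes the literal "Group info", "Channels", and data-line markers from the example layout above; a real file would need its own detection rules:

    ```python
    def split_groups(lines):
        """Split a flat data file into (header, groups).

        A new group starts at a 'Group info' line that follows data;
        each group owns its info lines, channel line, and data lines.
        """
        header, groups, current = [], [], None
        for line in lines:
            if line.startswith("Group info"):
                # start a new group unless we are still collecting info lines
                if current is None or current["data"]:
                    current = {"info": [], "channels": [], "data": []}
                    groups.append(current)
                current["info"].append(line)
            elif current is None:
                header.append(line)          # file-level comments/header
            elif line.startswith("Channels"):
                current["channels"].append(line)
            else:
                current["data"].append(line)
        return header, groups

    demo = ["Comments and header information",
            "Group info", "Group info", "Group info",
            "Channels", "Data...",
            "Group info", "Group info", "Group info",
            "Channels", "Data..."]
    header, groups = split_groups(demo)
    ```

    Each entry of `groups` could then be exported as its own Excel sheet, as the poster intends.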

  • Cannot edit or import external data files in Excel

    Hello

    I use a monthly report in Excel that works fine in Windows but not on the Mac. It is a binary XLSB file that pulls in 4 external .txt files from various places on the web to load the most recent data into the report. If I open the file in Windows, it asks if I want to allow the data, and when I click Yes it downloads the data from each txt file, does some calculations, and is all ready. On the Mac it says that it cannot find the files.

    So my first thought was to go to Data > Connections and check the properties, mainly the URL that it uses to pull in each file. However, in Excel for Mac (Office 365) 16, the Properties button is grayed out. This also means I cannot "save as" when a new month comes and change the URL to match the new month. You can bring up a properties box, but it is very limited, with no option to change the URL.

    I also tried in Excel 11; same problem. I do not want to Boot Camp and have to load Windows (using up 20 GB) just for a single file, but I need to use it. Any ideas?

    Richard

    Rekrmend wrote:

    Hello

    I use a monthly report in Excel that works fine in Windows but not Mac.

    Then you should post your question on the Microsoft Mac forums, because it is their software you are having problems with, and that's where the Office gurus hang out.

    http://answers.Microsoft.com/en-us/Mac

  • How to get the names of the variables and units used in a binary output file

    My colleague will be giving me LabVIEW-generated binary files (*.dat). There are more than 60 variables (columns) in the binary output file. I need to know the names of the variables and their units, which I think he has already configured in LabVIEW. Is there a way for him to produce a file that contains the variable names and units, so that I'll know what the binary file contains? He can create an equivalent ASCII file with a header giving the variable names, but it does not list the units of each variable.

    As you can tell I'm not a user of LabView, so I apologize if this question makes no sense.

    Hi KE,

    an ASCII file (probably in csv format) is just text, and contains all the data (intentionally) written to it. There is no special function to include units or anything else!

    Your colleague must save that information the same way he records the names and values...

    (When writing to text files, he could use Write to Text File, Format Into File, or Write to Spreadsheet File; even Write to Binary File could serve...)
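    One low-tech option is for the colleague to emit a small sidecar "legend" file alongside the binary, listing each column's name and unit once. A hypothetical sketch of the idea (the channel names and units below are invented):

    ```python
    import os
    import tempfile

    def write_channel_legend(path, channels):
        """channels: a list of (name, unit) pairs, one per binary column.

        Writes 'column<TAB>name<TAB>unit' lines so a non-LabVIEW reader
        can label the columns of the .dat file.
        """
        with open(path, "w", encoding="ascii") as f:
            f.write("column\tname\tunit\n")
            for i, (name, unit) in enumerate(channels):
                f.write(f"{i}\t{name}\t{unit}\n")

    # demo: a two-channel legend written next to the (hypothetical) .dat file
    legend = os.path.join(tempfile.mkdtemp(), "legend.txt")
    write_channel_legend(legend, [("T_inlet", "degC"), ("P_line", "PSIG")])
    print(open(legend).read())
    ```

    In LabVIEW this is a single Format Into File (or Write to Text File) call per channel, run once when the acquisition is configured.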

  • binary data loss

    I am running a VI in which each loop iteration saves a 1D array to a binary file. I let the loop run thousands of times, but after I exported the binary data to .xls format, I noticed that it was missing the last few hundred loops (I know because of what the data should have looked like). Basically I was generating a waveform, and it seemed as if when I let the waveform cycle 4 times, only 3 cycles would show up in Excel. Is there a reason for this?

    Fixed it! I had used the skeleton of the "read binary file" example for my VI, and the skeleton had the data element size set to 8 bytes. Apparently mine is 4 bytes, and so with 8 it thought there was only half the amount of data there actually was. I changed it and all the data is there! Thank you!
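    The element-size mismatch described here is easy to reproduce: interpreting 4-byte singles as 8-byte doubles yields exactly half the expected sample count. A small Python demonstration (big-endian byte order, LabVIEW's default for Write to Binary File, is assumed):

    ```python
    import struct

    # 1000 single-precision (4-byte) samples, as an SGL array would be written
    samples = [float(i) for i in range(1000)]
    blob = struct.pack(">1000f", *samples)

    wrong = struct.unpack(f">{len(blob) // 8}d", blob)   # read as 8-byte DBL
    right = struct.unpack(f">{len(blob) // 4}f", blob)   # read as 4-byte SGL

    print(len(wrong), len(right))  # 500 1000 -- half the samples "missing"
    ```

    The "wrong" read consumes the whole file without error, which is why the symptom looks like silently dropped data rather than a crash.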

  • Saving a cluster as binary data

    I have a compound cluster of ~50 items of different types that I need to save as binary data. Obviously I could do it with the brute-force method of 50 different writes to the file, but that seems silly.

    Is there an easier way to just dump the cluster to disk as binary data?  Writing it as a cluster seems to add extra characters.

    In an all-LabVIEW world, I would just save it as a variant, but it must be saved as raw binary in order to be read by another program as a C structure.

    There is the handy Boolean input on Write Binary File for "prepend array or string size".  Unfortunately, this Boolean applies only to the top-level data type being written, so whatever is inside the cluster will still have these lengths prefixed.  Therefore, you have to separate your data and write each piece individually.
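    The field-by-field idea can be illustrated outside LabVIEW: packing each element with an explicit layout produces raw bytes with no length prefixes, which is exactly what a C reader expects. A Python sketch with an invented three-field cluster:

    ```python
    import struct

    # Hypothetical cluster: a double, an int32, and a fixed 8-char string.
    # Packing field-by-field (like writing each cluster element separately)
    # yields raw bytes with no length prefixes, matching a packed C struct:
    #     struct Rec { double x; int32_t n; char name[8]; };
    def pack_record(x, n, name):
        return struct.pack("<di8s", x, n, name.encode("ascii"))

    rec = pack_record(3.14, 42, "chan0")
    print(len(rec))  # 20 bytes: 8 + 4 + 8, no prefixes
    ```

    The same care applies in reverse: the C side must use a packed struct (or matching alignment) and agree on byte order with the writer.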

  • Handling of binary data (TCP/IP)

    Hello

    I was wondering if someone could point me in the right direction for processing continuous TCP/IP data into more meaningful information. The data contains an array of 10 tanks. For the sake of argument, I saved some data in a text file. I tried to use the Type Cast and Unflatten From String functions to convert the data to ASCII in real time, but my lack of knowledge of binary data conversion makes it very difficult.

    Luckily, I managed to convert the stored binary data into an array of single-precision numbers that displays the correct values. However, I am still confused about doing this in real time. I have attached all of the data and the example VI.

    Thank you

    Sam

    After hours of searching on forums.ni.com, I was able to convert the data to 10 readable floating point values in LabVIEW. As drjdpowell pointed out, using the modern "Unflatten From String" is a much easier way to accomplish this task. Because I had to deal with an old version of LabVIEW (7.1), I had to use Type Cast with an I32 array type, a For loop, Swap Bytes, Swap Words, and Type Cast with an SGL array type. Please refer to the attached .vi for more details.
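    The Swap Bytes/Swap Words dance in LabVIEW 7.1 amounts to correcting byte order on a little-endian host. In other environments the same conversion is a one-step parse plus a byte swap; a Python illustration (the three sample values below are made up):

    ```python
    import array
    import sys

    # Raw TCP payload assumed to be big-endian float32 values.
    payload = bytes.fromhex("3f800000" "40000000" "40400000")  # 1.0, 2.0, 3.0

    vals = array.array("f")
    vals.frombytes(payload)
    if sys.byteorder == "little":
        vals.byteswap()   # same effect as the Swap Bytes / Swap Words trick
    print(list(vals))     # [1.0, 2.0, 3.0]
    ```

    Network data is conventionally big-endian, which is why the swap is needed on most desktop machines but would be a no-op on a big-endian host.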

  • Executable creates additional folders for oscilloscope and signal analyzer drivers

    Hello everyone,

    I am facing a problem while creating an executable. I use the 8.x layout.

    I use signal analyzer and oscilloscope drivers in my application. When I create an executable of my application, I find in the target folder 2 additional folders with the names "Agilent MXA Series" and "Tektronix TDS 300 Series"... These are the drivers that I use in my application.

    I have used drivers in other projects, but no extra folders were created there... and I use the 8.x layout there too.

    Any guess as to why this one creates a few extra folders? Please see the attached image showing the preview of the executable.

    Mmmmmmmm... sorted it...

    What I did is... I replaced the initialize.vi and close.vi with the ones from the driver that was installed using the installer, instead of the project-style folder... and I did that for those only... and now the executable doesn't create any extra folders... I don't know where the signal analyzer thing went...

  • Generate the analog waveform based on the data file

    I want to create an analog voltage output that follows a data file I have (Excel, csv, or text (which is easy)).  The data file describes a waveform with equal time between steps (dT = .0034 sec).  After the output has run through all the data points, I want it to repeat indefinitely.

    What is the best way to create the waveform of a data file?

    To create a waveform data type, calculate the dt by subtracting two values in column 1, and get the Y array from column 2. If you save the file as a comma-separated or tab-delimited text file, you can then use Read From Spreadsheet File. After obtaining the 2D array, you would use the Index Array and Array Subset functions.

    Assuming you use an NI data acquisition card for the output signal, you can pass a waveform data type to a DAQmx Write and configure the task for continuous generation.
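    The recipe above (dt from column 1, Y array from column 2, then repeat) can be sketched outside LabVIEW as well. A Python illustration with an inline three-point file (the values are invented):

    ```python
    import csv
    import io
    import itertools

    def load_waveform(f):
        """Read (time, value) rows; dt comes from the first two time stamps."""
        rows = [(float(t), float(y)) for t, y in csv.reader(f)]
        dt = rows[1][0] - rows[0][0]
        return dt, [v for _, v in rows]

    # inline demo file; a real one would come from open("wave.csv")
    demo = io.StringIO("0.0000,0.0\n0.0034,1.0\n0.0068,0.0\n")
    dt, y = load_waveform(demo)

    # itertools.cycle repeats the samples indefinitely, like regenerating
    # the output buffer; islice just bounds the demo.
    repeated = list(itertools.islice(itertools.cycle(y), 7))
    print(dt, repeated)
    ```

    In DAQmx terms, cycling the sample list corresponds to leaving regeneration enabled on the continuous output task.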
