Waveform chart data points: performance impact or not?

Hello

I searched the forum, but so far I have found only partial information about this behavior of the waveform chart.

I have a tab control on the front panel, and one tab page contains a waveform chart. I plot points at a rate of 0.5 Hz, and I have 8 plots (curves) on the chart.

I would like to have a lot of history, so I set the chart history length to 44000.

Everything works as expected, but I see some sluggish behavior when I click to another tab page and then return to the tab page with the chart.

In this case, the graph takes about 1 to 2 seconds to appear. This isn't a big problem, since the user typically examines only the last ~10 minutes (X autoscale deactivated, left time "border" changed). When only this small amount of data points is visible on the graph, the tab page appears quickly after the mouse click. When several hours of data are presented, it's slow.

I guess the main reason for this behavior is that when I switch back to the chart's tab page, the OS has to redraw a large number of data points, and that takes a lot of time?

I'm curious what the 'best practice' is in such a scenario. Should I store the data in a shift register and use an XY graph? (I actually have data points as a double plus the corresponding timestamp; there are small fluctuations in time, so I need all the timestamps.) Would that help? I could then append each new XY data point to the array in the shift register and redraw the graph at each new step.

Thanks for the tips!

I don't know if this applies to your situation, but sometimes LabVIEW does not refresh front panel indicators that are not visible on a tab page that is not in front. The chart has its own internal buffer, but I have no idea how the redraw is managed when many points have accumulated while it was hidden.

A graph redraws when data is written to it, because you have to write all the data each time. With the shift register approach the graph would only see the most recent data when it becomes visible, so that approach seems well suited.

Another thing: your chart or graph does not have 44000 pixels along the x-axis. LabVIEW has to reduce the data to the number of pixels available for display, and that takes some time too. With the shift register approach, you can manage the data yourself so that only the 500-1000-2000 points your graph will display are written to it. This removes the need for LabVIEW to do the reduction, and you can choose the method: averaging, decimation, sliding window, and so on.

I'd go with the shift register and graph approach.
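The reduction described above can be sketched outside LabVIEW. Here is a minimal min/max decimation in plain JavaScript (illustrative names, not from any library): reduce the data to roughly the number of pixels the graph can show, keeping each bucket's minimum and maximum so peaks survive the reduction.

```javascript
// Reduce `data` to at most `width` buckets; each bucket contributes its
// min and max so narrow peaks are not lost in the reduction.
function decimateMinMax(data, width) {
  if (data.length <= width) return data.slice(); // nothing to reduce
  const out = [];
  const bucket = data.length / width;
  for (let i = 0; i < width; i++) {
    const start = Math.floor(i * bucket);
    const end = Math.min(data.length, Math.floor((i + 1) * bucket));
    let lo = data[start], hi = data[start];
    for (let j = start + 1; j < end; j++) {
      if (data[j] < lo) lo = data[j];
      if (data[j] > hi) hi = data[j];
    }
    out.push(lo, hi); // two points per bucket
  }
  return out;
}
```

Feeding the graph only the decimated array on each update keeps the redraw cost proportional to the display width instead of the full history length.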

Lynn

Tags: NI Software

Similar Questions

  • Performance impact of the size of a CHM file

    Is there any performance impact depending on the size of a CHM file?

    The main issues people have with help file performance (whether or not it's a CHM file) relate to the number of images, DHTML hotspots, bookmarks, and links in a topic. The number of topics in a CHM shouldn't be a problem. What performance impact exactly are you trying to assess?

  • Measure the performance impact of a PowerCLI report on vCenter

    How can I tell how much of an impact my PowerCLI report will have on the performance of my vCenter Server?
    If I run a report, for example, which pulls a large number of events from vCenter Server and sorts through them, will the processing and memory usage fall mainly on the machine I am running the report from? How much load will it put on vCenter itself? (I do see RAM usage go up substantially on the machine I am running the report from.)
    Thank you!

    Just leave out the MaxSamples parameter

  • getting performance data of a virtual machine

    Hello community,

    I am trying to obtain performance data from a virtual machine using the following code:

    //create start & stop time 
    var end = new Date(); // now
    var start = new Date();
    start.setTime(end.getTime() - 3600000); // 1h before end
    //create a querySpec for one entity
    var querySpec = new Array();
    querySpec.push(new VcPerfQuerySpec());
    querySpec[0].entity = VM.reference; //set entity of workflow VM
    querySpec[0].startTime = start;
    querySpec[0].endTime = end;
    //create PerfMetricId for one metric
    var PM = new VcPerfMetricId();
    PM.counterId = 2;
    PM.instance = "";
    var arrPM = new Array();
    arrPM.push(PM);
    querySpec[0].metricId = arrPM; //assign PerfMetric to querySpec
    querySpec[0].intervalId = 20;
    querySpec[0].format = "csv";
    
    var CSV = VM.sdkConnection.perfManager.queryPerf(querySpec);  // query PerformanceManager
    System.log (CSV);// show if type is OK
    //show properties
    System.log (CSV.entity);
    System.log (CSV.value);
    System.log (CSV.sampleInfoCSV);
    System.log (CSV.dynamicProperty);
    

    The workflow is valid and functional. But there is no data. Here's the log produced by the workflow:

    [2010-11-12 12:12:53.742] [I] DynamicWrapper (Instance) : [VcPerfEntityMetricCSV]-[class com.vmware.vim.vi4.PerfEntityMetricCSV] -- VALUE : com.vmware.vim.vi4.PerfEntityMetricCSV@beb35d25
    [2010-11-12 12:12:53.742] [I] undefined
    [2010-11-12 12:12:53.742] [I] undefined
    [2010-11-12 12:12:53.742] [I] undefined
    [2010-11-12 12:12:53.742] [I] undefined
    

    I also tried querySpec[0].format = "normal"; same result.

    Where is my mistake? Am I on the right track?

    Please support me in getting the performance data. Thx.

    -


    Kind regards, Andreas Diemer

    visit http://www.vcoteam.info & http://mighty-virtualization.blogspot.com/

    SERVUS Andreas!

    I think I got to the next step:

    perfManager.queryPerf(...) returns an array, not the CSV values themselves. Therefore, pop() the data from the result array.

    A change in your code that returns a large amount of data in my lab:

    var CSVArr = VM.sdkConnection.perfManager.queryPerf(querySpec);  // query PerformanceManager
    System.log(CSVArr); // show if the type is OK
    // show properties
    var CSV = CSVArr.pop();
    System.log(CSV.entity);
    System.log(CSV.value);
    System.log(CSV.sampleInfoCSV);
    System.log(CSV.dynamicProperty);

    But I do not know why System.log(CSVArr) does not show the array as a type... (maybe because there is only one item!)

    Edit: found the answer:

    ... VALUE: com.vmware.vim.vi4.PerfEntityMetricCSV@229650a8

    versus

    ... VALUE: com.vmware.vim.vi4.PerfMetricSeriesCSV@5d7f26

    "Learned something once again!"

    BTW: I found the idea with the array on slide 59 of this presentation:

    http://communities.VMware.com/servlet/JiveServlet/download/1371233-29453/vSphereAPI_PerfMonitoring.PDF

    Hope this helps

    Cheers,

    Joerg

    PS: How can you add unformatted 'verbatim' source-code style text here in the forums?

  • KB973687 - msxml3.dll msxml6.dll - services.exe uses excessive virtual memory, the performance impact on the first logon after restart

    Since installing fix KB973687, I have had several SP2 and SP3 systems exhibit behavior that makes them unusable until the first logon is completed, which can take up to 20 minutes.   I've identified the patch (KB973687), and the DLLs it updates, as the origin of the problem, but uninstalling the patch does NOT return the systems to normal operation.

    I need to understand how to repair these Windows XP SP2 and SP3 systems to restore normal operation; reinstalling Windows, programs, and settings is an expensive solution.

    The performance problem is caused by services.exe slowly consuming about 1.5 GB of virtual memory and then slowly releasing it.  This seems to be triggered by the first logon after a restart; that logon is very slow, the screen is blank for most of it, and there may be memory allocation failures during logon.  Once this logon completes and memory usage returns to normal levels, logging on and off and other operations work normally until the system is restarted.

    I have spent a lot of time working with SysInternals Process Explorer trying to find which specific service might be involved, stripping the system down to bare essential services, with no luck.

    KB973687 appears to deliver only two files, msxml3.dll and msxml6.dll.  Uninstalling this patch and reinstalling V3 and V6 of the XML parser both fail to restore normal operation.

    Not all systems seem to be affected, and I am still trying to narrow down the differences.  The affected systems appear to be the oldest ones, with Windows XP installed for at least a year, plus Microsoft Office and Adobe Acrobat.

    Searching these forums and the Internet, I believe many have encountered this problem but have not reached this level of analysis; most seem to attribute it to a virus. I see several people resorting to starting explorer.exe manually, and I didn't know of any alternatives short of reinstalling Windows.

    Found the solution; the culprit was Comodo System Cleaner (CSC) 2.2.129172.4:

    "For some strange reason, after changing some system settings with CSC, the LastGood.tmp directory began to be read constantly from my hard drive. This would go on until about 90 to 99% of my memory was used; then it stopped, the memory began to be freed, and the system slowly began to function normally.
    I used Process Explorer from Sysinternals to help diagnose the problem; no process other than services.exe was using the memory.

    I used Sysinternals File Monitor to see that the LastGood.tmp directory was being read repeatedly.
    After uninstalling CSC the problem was resolved."

    Even with the effort it took to find the solution, it was still better than reinstalling.  Hope this solution helps others.

  • Waveform chart data to Excel

    Hello

    I have a waveform of CQI versus time, with time on the x-axis and amplitude on the y-axis. I want to log 10 seconds of data from each device into Excel, and the data in Excel must be appended to the previous data. In the end I want two columns, one for time and the other for amplitude, covering 10 seconds. Please help.

    First create an Excel CSV file: open Excel, choose CSV (comma delimited) as the Save As type, and name the file anything you want.

    It might be easier to suggest how to export the data to the file if we could see your VI. You can follow these steps to get two columns of data into Excel, exported from LabVIEW.

    (1) Create a 2-dimensional array of your y and x values using the Build Array function.

    (2) Wire the output of Build Array to the input of the Write To Measurement File VI (2D data). Wire the path to the CSV file created above to the file path input.

    (3) Wire a True constant to the 'append to file' input of the Write To Measurement File VI.

    (4) Make sure the run arrow is not broken and that the code is complete and all the required inputs are connected properly.

    (5) Run the VI; the data should be exported to the CSV file you created previously.
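    Purely to illustrate the target file layout (this is plain JavaScript, not part of the LabVIEW steps above; names are made up), the two-column, append-friendly CSV content would look like this:

```javascript
// Build CSV rows of "time,amplitude" pairs; appending the returned string
// to an existing .csv file adds the new data after the previous data.
function formatCsvRows(times, amplitudes) {
  let rows = "";
  for (let i = 0; i < times.length; i++) {
    rows += times[i] + "," + amplitudes[i] + "\n"; // one "time,amplitude" line per sample
  }
  // In Node.js, appending keeps previous data: require("fs").appendFileSync("data.csv", rows)
  return rows;
}
```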

    Hope this helps.

  • performance impact of VMware Tools

    We have been having discussions about the performance impact of installing VMware Tools on a virtual machine.

    Is there any comparison between a system running vanilla versus one running with VMware Tools installed?

    Obviously, I would think the system will run faster with the VMware drivers installed.

    Tools will mostly help I/O performance (not counting the usual conveniences, like graphics and time synchronization working better).  If your virtual machine has no significant networking or storage needs, then you probably don't need the Tools.  I'd say they're usually not worth it.

    If you want very good network performance without Tools, make sure you use a virtual e1000 device.  You can set ethernet0.virtualDev = "e1000".  This is not quite as good as the real vmxnet (or the new vmxnet3), but it is much better than the default vlance. If you regularly push 1 Gbit or more of real traffic to your virtual machines, I would consider it.

    Paravirtualized SCSI is fairly new, but from what I have seen there is enough of a performance gain in benchmarks.  Then again, you most likely don't need it unless you run a very disk-I/O-heavy VM, such as an Oracle database server.

    If you consolidate underutilized physical machines that never use 100% CPU/network/disk, then Tools are probably a waste of time.  But if you want performance as close to native as possible, and low CPU utilization during intensive I/O, then Tools are worth it.

    As for Red Hat not supporting our VMware Tools drivers, I don't see why this would practically be a problem: if you have any issue or crash that you think may be related to our Tools, you can always uninstall them and go back to where you were before.  If you have an unrelated issue and you are worried Red Hat will refuse to help while Tools are installed, you can do the same thing.  So what's the harm in trying them?

    ^^ If you found this post useful, please consider buying me a beer some time

  • Generate an analog waveform based on a data file

    I want to create an analog voltage output that follows a data file I have (Excel, CSV, or text, whichever is easiest).  The data file describes a waveform with equal time between steps (dT = .0034 sec).  After the output has run through all the data points, I want it to repeat indefinitely.

    What is the best way to create the waveform of a data file?

    To create a waveform data type, calculate the dt by subtracting two adjacent values in column 1, and get the Y array from column 2. If you save the file as a comma-separated or tab-delimited text file, you can then use Read From Spreadsheet File. After obtaining the 2D array, you would use the Index Array and Array Subset functions.

    Assuming you use an NI data acquisition card for the output signal, you can pass the waveform data type to a DAQmx Write and configure the task for continuous generation.
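    As a sketch of the same recipe outside LabVIEW (plain JavaScript, illustrative names): parse the two-column text, take dt from the first two time values, and take Y from column 2.

```javascript
// Parse two-column "time,amplitude" text into a waveform-like structure
// {t0, dt, Y}, computing dt from the first two timestamps as described above.
function parseWaveform(text) {
  const rows = text.trim().split(/\r?\n/).map(line => line.split(",").map(Number));
  const t = rows.map(r => r[0]);  // column 1: timestamps
  const Y = rows.map(r => r[1]);  // column 2: amplitudes
  return { t0: t[0], dt: t[1] - t[0], Y: Y };
}
```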

  • Is it possible to filter on data points?

    First post, so please let me know if I'm not following the correct format.

    I have a Discoverer report that pulls in 2 data points: the first is gross sales $ and the second is net sales $. For each data point we have 2 different columns, one for direct sales and one for indirect sales. The report ends up displaying 4 columns of data.

    Gross direct sales | Indirect gross sales | Net direct sales | Indirect net sales

    I want to filter the report to show only 2 columns of data: Indirect gross sales & Direct net sales.

    What is the best way to apply some type of filter to the data points? I tried creating a named calculation, but I was not able to produce the desired results.

    Thanks for any help you may be able to provide.
    I tried to search, but I was not able to locate a similar question.

    It sounds like you want to have 2 columns based on the 4 columns of data.

    If so, what you were doing was OK. IMO you want to create 2 calculations, each using a CASE statement.

    However, make sure that in the condition you always keep bringing back the original 4 data columns.

    As a first pass, you might want to wrap a SUM around the entire CASE statement.

    So, something like:

    1. calc_direct

    SUM(CASE WHEN source = 'Direct' THEN Net Sales ELSE 0 END)

    2. calc_indirect

    SUM(CASE WHEN source = 'Indirect' THEN Gross Sales ELSE 0 END)

    Make sense?

    Russ

  • moving data points on the waveform display

    Hello

    Any idea why the waveform display in my VI shifts the data points? The third tab of the display does not show this problem. I took a video of what happens to better illustrate what I'm talking about, and I attached my VI to help identify the problem.

    I am currently sampling data from a DAQmx task and storing it in a queue until the save-data button is pressed. Everything works well, but the problem is the display of the actual data: although the points make sense, they move around in a small area near where they were first drawn. It happens even when I clear the visible data and restart the data collection, as shown in the video.

    Thanks for any help you may be able to provide.

    Guy

    How much data are you drawing? If you draw many more data points than there are pixels on the x-axis of the graph, you can get peaks that appear to move because of the graphic interpolation LabVIEW has to do.

    Mike...

  • When zoomed in on a waveform graph, how can I get all of the data points that are currently displayed on the graph?

    I use the X-zoom tool on the graph palette. On this graph, the x-axis is time. So, for example, if I have 30 seconds displayed on the x-axis of the complete graph, and I zoom in on the middle 10 seconds, how can I get the y-axis data points that correspond to that middle 10 seconds?

    Similar to Cory's suggestion, could you use the X Scale -> Range -> Minimum and Maximum properties to retrieve the appropriate data?

    Maybe even tie your data pull to the scale range change event?
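    The idea can be sketched as follows (plain JavaScript, not LabVIEW; names are illustrative): read the visible Minimum/Maximum from the x-scale range and keep only the points that fall inside it.

```javascript
// Given parallel arrays of timestamps and values, return only the points
// whose time falls inside the visible x-scale range [xMin, xMax].
function pointsInRange(times, values, xMin, xMax) {
  const out = { t: [], y: [] };
  for (let i = 0; i < times.length; i++) {
    if (times[i] >= xMin && times[i] <= xMax) {
      out.t.push(times[i]);
      out.y.push(values[i]);
    }
  }
  return out;
}
```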

  • Putting setpoint data in a waveform graph

    Hello

    In a measurement graph, which is the result of a waveform, I would like to add the setpoint data as well. How can I add this setpoint data to the chart?

    In a control loop system, the output is controlled by a LabVIEW program. I would like to see the setpoint of this system in the graph as well. These setpoints are changed at random times, so I don't know how to create a waveform from this data. Should I create a waveform of the setpoint, or is there another option to show the set value in the chart?

    Cheers,

    Rolf

    That can make it much more difficult.

    At some point an array of 1000 points needs to be assembled (obviously). How depends greatly on your structure. Synchronizing the timing of the two signals can be very difficult. It would be easy if the set value only changed once per 1000-point cycle.

    You may want to read the SP back in as an analog input, so you get both samples from the hardware. That way the two will always be synchronized. But I'm already assuming here that the signal is an analog input (and the set value an analog output). What hardware do you use?

    I think you will always need a loop to get the data and a loop to set the SP. The two loops will run in parallel and will not be synchronized. The trick is to synchronize them, or to get deterministic timestamps in the two loops. If you can make that happen, you're 80% there. You could make the loops timed loops (using the same clock). Then you get timestamps in both loops. The SP loop can queue its changes, pushing the value and the time. Then the data loop can use those values and times to create the 1000 SP samples.
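    The last step above, turning queued (value, time) setpoint changes into one SP sample per data timestamp, can be sketched like this (plain JavaScript, illustrative names; the last value is held between changes):

```javascript
// Expand a sparse, time-sorted list of {t, sp} setpoint changes into one
// setpoint sample per data timestamp, holding the last value between
// changes (zero-order hold).
function sampleSetpoint(changes, sampleTimes) {
  const out = [];
  let i = 0;
  let current = changes.length ? changes[0].sp : 0; // assume first SP before data starts
  for (const t of sampleTimes) {
    while (i < changes.length && changes[i].t <= t) {
      current = changes[i].sp; // change takes effect at its timestamp
      i++;
    }
    out.push(current);
  }
  return out;
}
```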

  • What is the performance impact of using the variant data type on application speed, memory, etc.?

    This is one of my "let's get this settled once and for all" threads.

    I have avoided variant data types when possible to protect the performance of my apps. From some observations I have made over the years, I believe that:

    (1) in-place operations cannot be carried out on variants;

    (2) variants passed to a subVI (regardless of which terminal on the connector pane) are always copied.

    I would like confirmation or correction of the above, so that we all know this animal we call LabVIEW a little better.

    Thank you

    Ben

    Ben wrote:

    This is one of my "let's get this settled once and for all" threads.

    I have avoided variant data types when possible to protect the performance of my apps. From some observations I have made over the years, I believe that:

    (1) in-place operations cannot be carried out on variants;

    (2) variants passed to a subVI (regardless of which terminal on the connector pane) are always copied.

    I would like confirmation or correction of the above, so that we all know this animal we call LabVIEW a little better.

    Thank you

    Ben

    I have checked that I can pass a variant to a subVI without a copy, but it is still impossible to do anything with a variant (scaling, limit checks, etc.) without first copying it into a new buffer via the 'Variant to Data' conversion.

    Thus:

    for large data sets, variants are a bad idea.

    Ben

  • Performance impact of placing attributes on different entities in a data model design

    I'm trying to understand the performance implications of two possible data model designs.

    Here is my structure of the entity:

    Global > person > account > option

    Generally, at runtime, I instantiate one person, a single account, and five options.

    There are various amounts, determined according to the age of the person, which should be assigned to the correct option.

    Here are my two designs:

    Design one

    attributes on the person entity:
    the person's age
    the person's option 1 amount
    the person's option 2 amount
    the person's option 3 amount
    the person's option 4 amount
    the person's option 5 amount

    attributes on the option entity:
    the option's amount

    supporting rule table:
    the option's amount =
    the person's option 1 amount if the option is number 1
    the person's option 2 amount if the option is number 2
    the person's option 3 amount if the option is number 3
    the person's option 4 amount if the option is number 4
    the person's option 5 amount if the option is number 5

    Design two

    attributes on the person entity:
    the person's age

    attributes on the option entity:
    the option's amount
    the option's option 1 amount
    the option's option 2 amount
    the option's option 3 amount
    the option's option 4 amount
    the option's option 5 amount

    supporting rule table:
    the option's amount =
    the option's option 1 amount if the option is number 1
    the option's option 2 amount if the option is number 2
    the option's option 3 amount if the option is number 3
    the option's option 4 amount if the option is number 4
    the option's option 5 amount if the option is number 5

    Given the two models, I can see what looks like an advantage for design one: at runtime you have fewer attributes (6 on the person + 1 on each of the 5 options = 11) than in design two (1 on the person + 6 on each of the 5 options = 31), but I'm not sure. An advantage of design two might be that the algorithm has to traverse the entity structure less: the rule table deals entirely with attributes of the option entity.

    Either way, there is a rule table to determine the amounts:

    Design one
    the person's option 1 amount =
    2 if age = 10
    5 if age = 11
    7 if age = 12, etc.

    Design two
    the option's option 1 amount =
    2 if age = 10
    5 if age = 11
    7 if age = 12, etc.

    Here, it seems that one would have to traverse the entity structure for design two.

    Which design will have better performance with a large number of rules, or would it make no difference at all?

    Hello!

    In our experience, you only need to think about this kind of thing if you are dealing with 100's or 1000's of instances (usually via ODS). With a number as low as yours, the differences will be negligible, and you should (in general) go with the solution that is most similar to the source material or the business user's understanding. Also, I guess this is an OWD project? That may be even better, as inference is performed incrementally as new data is added through the screens, rather than in one 'big bang' as in ODS.

    It seems that model 1 is the easiest to understand and explain. I wonder why you have the option entity at all, because it seems to be a one-to-one relationship? If the person can only have a single option 1 amount, option 2 amount, etc., and there are only ever going to be (up to) 5 options... is that assumption correct? If so, you can keep these just as attributes at the person level without the need for entities. If there are other requirements for option instances then, of course, use them, but given the information here the option entity doesn't seem to be necessary. That would be the fastest of all :-)

    Whichever you choose, with the number of instances this low, you have nothing to fear in terms of performance.

    I hope this helps! Write back if you have more info / questions.
    Cheers,
    Ben

  • displaying data on a waveform graph inside/outside a while loop

    I created a VI using the random number generator, feeding the number into the sine function (Express >> Arithmetic >> Maths >> Trig >> Sine) and connecting the output of the sine function to a waveform chart. The waveform chart displays it, no problem. If I replace the waveform chart with a waveform graph, I get an error that the source type is a different type from the sink. I then put the waveform graph outside the while loop, hoping that the tunnel would act like a chart, but I still get the same error. I then put a Build Array between the loop border and the waveform graph placed outside the while loop. I get no error, but no data is displayed on the waveform graph. Theoretically, if I press stop I would see a distorted sine wave on the waveform graph, but this doesn't seem to be the case. I am wondering how to view data on a waveform graph in such cases!

    Thank you in advance for reading and helping!

    Cheers

    First of all, you should take some of the basic LabVIEW tutorials.

    Since you need to display data point by point, and a chart is the preferred method for displaying data point by point, I don't know why you are trying to use a graph. In any case, you cannot simply use a Build Array, because that would just hold the result of the last iteration, and your graph would show a single point. If you enable auto-indexing, then you get all the values, but not before the end of the loop. If you were to use a shift register and Build Array, you could place the graph inside the loop, but then you would face performance issues as the array would grow uncontrollably.
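    A bounded-history variant of the shift register approach avoids the uncontrolled growth: drop the oldest point once the buffer is full, as a chart's internal history buffer does. A sketch in plain JavaScript (illustrative, not LabVIEW):

```javascript
// Append a value to a history buffer, dropping the oldest point once
// `maxLen` is reached, so the array cannot grow without bound the way a
// naive Build Array in a shift register would.
function appendBounded(history, value, maxLen) {
  history.push(value);
  if (history.length > maxLen) history.shift(); // drop oldest point
  return history;
}
```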
