Best practice for running two loops (read + write) simultaneously and in sync

Hello

I thought I had a working program that could write and read simultaneously with reasonable accuracy, but I have discovered it is not as reliable as I thought (it sometimes gets out of sync / skips steps).

Is there a good guide, tutorial, or recommended method for doing this reliably, in sync, and efficiently?

Should I use a notifier and/or timed loops?

I have a write loop that sends steps to a certain type of hardware (stepper motor, magnet, etc.) at no more than 100 Hz.

I have a read loop that reads data from a detector (a photodiode; its signal depends on the current write step) at no more than about 1000 Hz (the read loop runs faster than the write loop).

I then pass information from one loop to the other (one loop writes it to a shared structure and the other reads from it).

I then save the data to a file by saving the contents of the chart.
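For reference, the overall shape of my two loops is roughly the following (sketched here in Python rather than LabVIEW, with placeholder comments instead of my real hardware I/O):

    import queue
    import threading
    import time

    data_q = queue.Queue()                       # write loop -> read loop

    def write_loop(n_steps: int) -> None:
        """Producer: command one step at ~100 Hz and share the current step."""
        for step in range(n_steps):
            # send_step_to_hardware(step)        # placeholder for the real output call
            data_q.put((step, time.time()))      # tell the read loop which step we are on
            time.sleep(1 / 100)
        data_q.put(None)                         # sentinel: tells the read loop to stop

    def read_loop() -> None:
        """Consumer: read the detector at ~1000 Hz, tagging samples with the latest step."""
        current_step = None
        while True:
            try:
                item = data_q.get_nowait()       # pick up a new step if one has arrived
                if item is None:
                    break
                current_step = item
            except queue.Empty:
                pass
            # sample = read_detector()           # placeholder for the real input call
            # log (current_step, sample) for saving later
            time.sleep(1 / 1000)

    writer = threading.Thread(target=write_loop, args=(100,))
    reader = threading.Thread(target=read_loop)
    reader.start(); writer.start()
    writer.join(); reader.join()

Both loops here are software-timed (the sleeps), which I suspect is exactly why things drift or skip steps.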

Thank you

Jeff_Tech wrote:

(1)... in sync...

(2) ... (the read loop runs faster than the write loop...)

Aren't these two statements contradictory?

There is important information you have left out. For example, is it enough for the two to be in phase, or is it also necessary that they run at a very steady rate?

How do the two depend on each other? Is this a control application where the output depends on the input in some way, or is it simply a stimulus/response pair?

Are you doing single-point, software-timed acquisition?

Does it need to run for long periods of time, or is it just one scan each time?

In any case, the right thing to do is probably to run the two hardware-timed off the same clock. Configure the AO output array and arm the AI, then start the two running from the same clock (for example, you might bind the AI convert clock to the AO sample clock). Read the buffered AI data at your leisure in a loop that does not need any special timing.
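To make that concrete, here is a rough sketch of the same idea using the Python nidaqmx package instead of LabVIEW (purely for illustration); the device name "Dev1", the channels, and the "/Dev1/ao/StartTrigger" terminal are assumptions you would have to adapt to your hardware.

    import numpy as np
    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    AO_RATE = 100        # write/step rate (Hz)
    AI_RATE = 1000       # read rate (Hz)
    N_STEPS = 500        # steps in one scan
    steps = np.linspace(0.0, 5.0, N_STEPS)   # placeholder output waveform

    with nidaqmx.Task() as ao, nidaqmx.Task() as ai:
        ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
        ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")

        # Both tasks are hardware-timed by the device, not by software loops.
        ao.timing.cfg_samp_clk_timing(AO_RATE, sample_mode=AcquisitionType.FINITE,
                                      samps_per_chan=N_STEPS)
        ai.timing.cfg_samp_clk_timing(AI_RATE, sample_mode=AcquisitionType.FINITE,
                                      samps_per_chan=N_STEPS * (AI_RATE // AO_RATE))

        # Arm AI on the AO start trigger so both start on the same edge and
        # stay locked to the device clocks (10 AI samples per AO step here).
        ai.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/ao/StartTrigger")

        ao.write(steps, auto_start=False)
        ai.start()                      # armed, waiting for the trigger
        ao.start()                      # starting AO fires the trigger

        data = ai.read(number_of_samples_per_channel=N_STEPS * (AI_RATE // AO_RATE),
                       timeout=N_STEPS / AO_RATE + 5.0)
        ao.wait_until_done()

The point is that the timing correlation comes from the shared hardware clocks and trigger, not from how fast the software loops spin; your loops then only have to move buffered data around and log it.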

Can you show us your code so we get a better sense of what you are doing?

Tags: NI Software

Similar Questions

  • Best practices for network configuration of vSphere with two subnets?

    Well, I am setting up 3 ESXi hosts connected to shared storage across two different subnets. I configured the iSCSI initiators and iSCSI targets with their own default gateway - 192.168.1.1 - through a Cisco router, and did the same with the hosts, configured with their own default gateway - 192.168.2.2. I don't know whether I should have a router in the middle to route traffic between the two subnets, since I use iSCSI port binding and NIC teaming. If I shouldn't use a physical router, how do I route traffic between the different subnets and use iSCSI port binding at the same time? What are the best practices for implementing a vSphere network with two subnets (ESX host network / iSCSI network)? Thank you in advance.

    In the most common iSCSI setups, traffic between the hosts and the storage is not routed, because a router could reduce performance.

    If you have VLAN 10 (192.168.1.0/24) for iSCSI, VLAN 20 (192.168.2.0/24) for ESX management, VLAN 30 (192.168.3.0/24) for guest VMs, and VLAN 40 (192.168.4.0/24) for vMotion, a deployment scenario might be something like:

    NIC1 - vSwitch 0 - MGMT VMK (192.168.2.10) active, vMotion VMK (192.168.4.10) standby
    NIC2 - vSwitch 1 - guest VM port group (VLAN 30) active
    NIC3 - vSwitch 2 - iSCSI VMK1 (192.168.1.10) active
    NIC4 - vSwitch 2 - iSCSI VMK2 (192.168.1.11) active
    NIC5 - vSwitch 1 - guest VM port group (VLAN 30) active
    NIC6 - vSwitch 0 - MGMT VMK (192.168.2.10) standby, vMotion VMK (192.168.4.10) active

    You would place your storage target on VLAN 10 with an IP address of something like 192.168.1.8, and iSCSI traffic would remain on that VLAN. The default gateway configured in ESXi would be the VLAN 20 router, with an IP address of something like 192.168.2.1. I hope that scenario helps lay out some options.


  • Just upgraded - tips on best practices for sharing files on a Server 2008 Std.

    The domain contains about 15 machines with two domain controllers; one holds the data, the other the app files / print services, etc. I just upgraded from 2003 to 2008 and want advice on best practices for setting up a group of file shares. Basically I want each user to have their own folder, but also a shared staff folder. Since I am usually accustomed to using Windows Explorer, I would like to know whether these shares can be set up in a better way. Also, I noticed that 2008 has a Contacts feature. How can it be used? I would like to message or email users their file locations. I also want to set up a lower-level admin to handle the shares without letting them go too deep into the server; I'm not sure how.

    I have read a bit, but I don't like testing directly any more because it can cause problems. So basically I'm after a short, neat way to manage shares using the MMC, as well as a way to mail users the locations of their shares from the server. What kind of access control or permissions are suitable for documents? Also, how can I have users work from Office templates without changing the format of the template?

    THX

    g

    Hello 996vtwin,

    Thank you for visiting the Microsoft Answers site. The question you have posted is related to Windows Server and would be better suited to the Windows Server TechNet community. Please visit the link below to find a community that will support what you are asking:

    http://social.technet.Microsoft.com/forums/en-us/category/WindowsServer

    Hope this helps.

    Adam
    Microsoft Answers Support Engineer
    Visit our Microsoft answers feedback Forum and let us know what you think

  • Best practices for the integration of the Master Data Management (MDM)

    I am working on integrating MDM with Eloqua and am looking for the best approach for syncing lead/contact data changes from Eloqua into our internal MDM hub (outbound only). Ideally we would like the integration to be practically real-time, but my findings to date suggest there is no such option; any integration will involve some kind of schedule.

    Here are the options we have considered:

    1. "Exotic" CRM integration: using internal events to capture and queue in the queue changes internal (QIP) and allows access to the queue from outside Eloqua SOAP/REST API
    2. Data export: set up a Data Export that is "expected" to run on request and exteernally annex survey via the API SOAP/REST/in bulk
    3. API in bulk: changes in voting that has happened since the previous survey through the API in bulk from Eloqua outside (not sure how this is different from the previous option)

    Two other options which may not work at all and are potentially anti-patterns:

    • Cloud connector: create a scheduled campaign that looks for changes and configure a cloud connector (if possible at all) to notify an MDM endpoint to query the contact/lead record from Eloqua.
    • "Native" CRM integration (crazy): fake a native CRM endpoint (for example, Salesforce) and use internal events and external calls to have Eloqua push data into our MDM.

    Questions:

    1. What is the best practice for this integration?
    2. Is there an option that would give us close to real-time integration (technically asynchronous, but event/callback based)? (Something like outbound messaging in Salesforce.)
    3. What limits should we consider for these options? (For example, daily API call limits, SOAP/REST response size.)

    If you can, I would try to talk to Informatica...

    To imitate the native-type integrations, you would use the QIP and control which activities are posted to it via internal events, as you would with a native integration.

    You would also use the cloud connector API to allow you to set up a CRM (or MDM) integration program.

    You would have identification fields added to the contact and account objects in Eloqua for their respective IDs in the MDM system, and keep track of the last MDM update with a date field.

    A task scheduled outside of Eloqua would run at a certain interval, extract the QIP changes to send to MDM, and pull the contacts waiting to be sent, in place of the cloud connector.
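    A very rough sketch of that scheduled task in Python, just to show the poll-and-push shape; every URL, parameter, and field name below is hypothetical and would need to be replaced with your real Eloqua export definition and your MDM hub's API:

        import requests
        from datetime import datetime, timezone

        # Hypothetical URLs -- substitute your real Eloqua export endpoint and
        # your MDM hub's ingest endpoint.
        ELOQUA_EXPORT_URL = "https://eloqua.example.invalid/bulk/contacts/export/data"
        MDM_INGEST_URL = "https://mdm.example.invalid/api/contacts"
        AUTH = ("site\\user", "password")

        def poll_and_push(last_run: datetime) -> datetime:
            """Pull contacts changed since last_run and push them to the MDM hub."""
            now = datetime.now(timezone.utc)
            resp = requests.get(
                ELOQUA_EXPORT_URL,
                params={"lastUpdatedAt": last_run.isoformat()},  # hypothetical filter name
                auth=AUTH,
                timeout=30,
            )
            resp.raise_for_status()
            for contact in resp.json().get("items", []):
                requests.post(MDM_INGEST_URL, json=contact,
                              auth=AUTH, timeout=30).raise_for_status()
            return now

    You would run it from an external scheduler (cron, Windows Task Scheduler, an integration server) at whatever interval your near-real-time requirement tolerates.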

    Unfortunately there isn't really anything like outbound messaging. You can have form submits send data to a server immediately (it would be a bit like integration rules running from the form-processing steps).

    Cheers,

    Ben

  • Best practices for installing RAC across two data center areas?

    The data center has two distinct areas.
    In each area we have a storage system and a RAC node.

    We will install RAC 11gR2 with ASM.

    For data, we want to use diskgroup +DATA, normal redundancy, mirrored across the two storage systems.

    For CRS + voting, we want to use diskgroup +CRS, normal redundancy.
    But for the +CRS diskgroup with normal redundancy there are 3 LUNs, and we only have 2 storage systems.
    In my view, the third LUN is necessary to avoid split-brain situations.

    If we put two LUNs on storage #1 and the other on storage #2, what will happen when storage #1 fails, meaning two of the three disks for the +CRS diskgroup become inaccessible?
    What will happen when all the hardware in area #1 fails?
    Is human intervention required, either at the time of the failure or when area #1 comes back up?

    Is there a best practice for a 2-area, 2-storage RAC configuration?

    Joachim

    Hello

    As far as the voting files are concerned, a node must be able to access more than half of the voting files at any time (a simple majority). To be able to tolerate the failure of n voting files, at least 2n + 1 must be configured for the cluster; for example, with three voting files (n = 1) the cluster survives the loss of any one of them.
    The problem in a stretched cluster configuration is that most installations use only two storage systems (one on each site), which means the site that hosts the majority of the voting files is a potential single point of failure for the entire cluster. If the storage or the site where the n + 1 voting files are configured fails, the entire cluster goes down, because Oracle Clusterware loses the majority of the voting files.
    To avoid a complete cluster failure, Oracle supports a third voting file on a cheap, low-end, standard NFS-mounted device somewhere in the network. Oracle recommends putting the NFS voting file on a dedicated server which belongs to a production environment.
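    A trivial illustration of that 2n + 1 rule (plain Python, nothing Oracle-specific):

        def voting_files_needed(failures_to_tolerate: int) -> int:
            """Simple majority: tolerating n voting-file failures needs 2n + 1 files."""
            return 2 * failures_to_tolerate + 1

        print(voting_files_needed(1))  # 3 -- the stretched-cluster case discussed above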

    The white paper below shows how to accomplish this:
    http://www.Oracle.com/technetwork/database/Clusterware/overview/grid-infra-thirdvoteonnfs-131158.PDF

    Also, regarding the configuration of the voting files and OCR (11.2) when you use ASM: how should they be stored?
    I recommend that you read:
    {message: id = 10028550}

    Kind regards
    Levi Pereira

  • Best practices for dealing with exceptions on storage members

    We recently encountered a problem where one of our DistributedCaches was shutting itself down and restarting due to a RuntimeException being thrown from our code (see below). Since it is our own code, we have updated it to not throw a RuntimeException under any circumstances.

    I would like to know if there are any best practices for exception handling, other than catching exceptions and logging them. Should we always catch exceptions and ensure that they do not propagate back to code that is running from the Coherence jar? Is it possible to configure Coherence so that our DistributedCaches are not terminated even when custom filters and the like throw RuntimeExceptions?


    Thank you, Aidan


    Exception below:

    2010-02-09 12:40:39.222/88477.977 Oracle coherence GE < error > 3.4.2/411 (thread = DistributedCache:StyleCache, Member = 48): a (java.lang.RuntimeException) exception occurred reading Message AggregateFilterRequest Type = 31 for Service = DistributedCache {Name = StyleCache, State = (SERVICE_STARTED), LocalStorage = active, PartitionCount = 1021, BackupCount = 1, AssignedPartitions = 201, 204 = BackupPartitions}
    2010-02-09 12:40:39.222/88477.977 Oracle coherence GE < error > 3.4.2/411 (thread = DistributedCache:StyleCache, Member = 48): DistributedCache ending because of an exception not handled: java.lang.RuntimeException

    We have reproduced your problem in-house, and it looks like the aggregate filter handling does not do the correct thing (i.e. the caught exception is not handled properly) when a runtime exception is thrown. In general, runtime exceptions should simply be caught and returned to the application without bringing down the cache server, so we will be fixing this.
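    Until then, the usual defensive practice is the one you describe: catch and log inside your own callback code so nothing propagates back into the container. A generic sketch of that idea (in Python purely for illustration, since Coherence filters and processors are Java classes):

        import functools
        import logging

        log = logging.getLogger("cache-callbacks")

        def swallow_and_log(default=None):
            """Wrap code a framework invokes so exceptions are logged, not propagated."""
            def decorate(fn):
                @functools.wraps(fn)
                def wrapper(*args, **kwargs):
                    try:
                        return fn(*args, **kwargs)
                    except Exception:                 # last-resort boundary
                        log.exception("error in %s; returning default", fn.__name__)
                        return default
                return wrapper
            return decorate

        @swallow_and_log(default=False)
        def style_filter(entry):
            # placeholder for the custom filter logic that used to throw RuntimeException
            return entry["style"] == "bold"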

  • Best practices for business rules

    Our business rules have
    Fix([Cost Center])

    to extract the user's cost center from their form, to make the rule run faster.

    What is the best practice for running the same business rule, but for all cost centers? Would it be to put this business rule in a menu somewhere and have it prompt users to manually type the cost center, so that the business rule processes all cost centers?
    Thank you.
    David

    We always use the method that John described: have two versions of the same business rule. One accepts a runtime prompt for the "user" version of the business rule; the other is fixed on all level-zero members for the "admin" version of the business rule.

  • I would like to know the "best practices" for permanently disconnecting my computer from the internet and updates.

    Thank you for taking the time to read this. I would like to know the "best practices" for permanently disconnecting my computer from the internet and updates. I thought I would do a clean install of Windows XP, install my Microsoft Works again, and nothing else. I would like to effectively turn my computer into a word processor. It keeps getting slower and slower, and I am getting blue screen errors again. I received excellent Microsoft support when this happened before, but since my computer is around 13 years old, I think it is not worth the headache to try to fix it. I ran the Windows 7 Upgrade Advisor, and my computer would not be able to upgrade. Please, can someone tell me how to make it just a word processor without updates or an internet connection? (I already have a new computer with Microsoft Windows 7 Home Premium; it's the computer that I use. The old computer is just sitting there, and once a week or so I run updates.) I appreciate your time, thank you!

    original title: old computer unstable

    http://Windows.Microsoft.com/en-us/Windows-XP/help/Setup/install-Windows-XP

    http://www.WindowsXPHome.WindowsReinstall.com/sp2installxpcdoldhdd/indexfullpage.htm

    http://aumha.NET/viewtopic.php?f=62&t=44636

    Clean-install XP sites:
    You can choose which site to use to reinstall XP.

    Once it is installed, you do not have to connect it to anything; however, some updates may be required for Works to run. Test this by installing Works and seeing whether you get an error message. Other than that, you should be fine.

  • What is the best practice for a "regular" VMware server and VDI environment?

    What is the best practice for a "regular" VMware server and VDI environment? Can a single environment (ESXi and SAN) accommodate both if it is a whole new configuration? Or is it better to keep them separate?

    Appreciate any input.

    Quick and dirty answer is that "it depends."

    Seriously, it really depends on two things: budget and IO. If you have the money for two, then buy two and have one host your server environment and the other your VDI desktops; their IO profiles are completely different.

    If that is not possible, try to keep each type of use on its own dedicated LUN.

  • Best practices for vSphere 5 Networking

    Hi all

    Given the following environment:

    (1) 4 physical servers, each with 16 (Gigabit) network interface cards, which will run vSphere 5 Std.

    (2) 2 switches with stacking capability for the SAN storage

    (3) 2 Equallogic PS4000 SAN (two controllers)

    (4) 2 switches for virtual machine traffic

    As for networking, I intend to create vSwitches on each physical server as follows:

    1. vSwitch0 - used for iSCSI storage

    6 network adapters teamed, with the IP-hash policy and multipathing to the iSCSI storage; storage load balancing is Round Robin (VMware).

    (VMware suggests using 2 NICs per IP storage target; I'm not sure.)

    2. vSwitch1 - used for virtual machines

    6 teamed network adapters for virtual machine traffic, with the IP-hash policy

    3. vSwitch2 - management

    2 teamed network adapters

    4. vSwitch3 - vMotion

    2 teamed network adapters

    Could you give me some suggestions?

    Alex, the setup described in the storage and VMware documentation is the one Dell uses for their servers; it has been tested on their equipment and published in the document, so it is the recommended one...

    So that is the best practice for a Dell server with the model mentioned in the document.

    Hope that clarifies...

  • Best practices for tags

    Hello

    In the bundled applications, tags are used in most apps. For example, in the Customer Tracker app we can add tags to a customer, and these tags are stored in a varchar2 column in the Customers table.
    In my case, I have pre-defined real-estate tags in a lookup table called TAGS, for example: Full floor, Furnished, Equipped, Duplex, Attached... What is the best practice for tagging properties:
    1 - Store these tags in a varchar column in the PROPERTIES table using a shuttle item.
    OR
    2 - Store them in a third table, e.g. PROPERTIES_TAGS (ID PK, PROPERTY_ID FK, TAG_ID FK), and then use the LISTAGG function to show the tags on one line in the properties report.
    OR
    Do you have a better option?

    Kind regards
    Fateh

    Fateh says:
    Hello

    In the bundled applications, tags are used in most apps. For example, in the Customer Tracker app we can add tags to a customer, and these tags are stored in a varchar2 column in the Customers table.
    In my case, I have pre-defined real-estate tags in a lookup table called TAGS, for example: Full floor, Furnished, Equipped, Duplex, Attached...

    These seem to be two different use cases. In the bundled applications, tags allow end users to attach free-form metadata to the data for their own needs (these are sometimes called "folksonomies"). Users can use tags for different purposes, or different tags for the same purpose. For example, I could tag customers 'Wednesday', 'Thursday' or 'Friday' because those are the days they receive their deliveries. For the same purpose, you could tag the same customers '1', '8' and '15' by the numbers of the trucks making the deliveries. You might use 'Monday' to indicate that the customer is closed on Mondays...

    In your application you are assigning predefined attributes to known properties. That is a standard 1:M attribute model; presenting it with the tag metaphor does not make it equivalent to user free-form tagging.

    What is the best practice for tagging properties:
    1 - Store these tags in a varchar column in the PROPERTIES table using a shuttle item.

    If you do that, how would you:

  • Search for furnished duplex properties efficiently?
  • Globally change 'Mounted' to 'Integrated'?
  • Report the number of properties broken down by Full floor, Duplex, Equipped...?

    OR
    2 - Store them in a third table, e.g. PROPERTIES_TAGS (ID PK, PROPERTY_ID FK, TAG_ID FK), and then use the LISTAGG function to show the tags on one line in the properties report.

    As "Why use a Lookup Table" shows, this is the correct way to proceed. It allows the data to be indexed for efficient retrieval, and questions such as those above can be handled simply by using joins and grouping (a runnable sketch follows at the end of this reply).

    You might want to examine the possibility of eliminating the ID PK and using an index-organized table for this.

    OR
    Do you have a better option?

    I'd also look carefully at your data model. Make sure you're not flirting with the EAV (entity-attribute-value) anti-pattern. Aren't some/all of these values simply attributes of the property?
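    To make option 2 concrete, here is a small runnable sketch using Python's built-in sqlite3 module with made-up table contents; SQLite's GROUP_CONCAT stands in for Oracle's LISTAGG, and the composite primary key on the junction table is the "no surrogate ID" variant mentioned above:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()
        cur.executescript("""
            CREATE TABLE properties (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE tags       (id INTEGER PRIMARY KEY, label TEXT UNIQUE);
            CREATE TABLE properties_tags (
                property_id INTEGER REFERENCES properties(id),
                tag_id      INTEGER REFERENCES tags(id),
                PRIMARY KEY (property_id, tag_id)   -- junction table, no surrogate key
            );
            INSERT INTO properties VALUES (1, 'Seaview flat'), (2, 'Downtown duplex');
            INSERT INTO tags VALUES (1, 'Furnished'), (2, 'Duplex'), (3, 'Full floor');
            INSERT INTO properties_tags VALUES (1, 1), (2, 1), (2, 2);
        """)

        # One row per property with its tags aggregated into a single string
        # (GROUP_CONCAT plays the role of LISTAGG here).
        cur.execute("""
            SELECT p.name, GROUP_CONCAT(t.label, ', ') AS tags
            FROM properties p
            JOIN properties_tags pt ON pt.property_id = p.id
            JOIN tags t             ON t.id = pt.tag_id
            GROUP BY p.name
        """)
        print(cur.fetchall())

        # The "furnished duplex" search is just a join, a filter and a count.
        cur.execute("""
            SELECT p.name
            FROM properties p
            JOIN properties_tags pt ON pt.property_id = p.id
            JOIN tags t             ON t.id = pt.tag_id
            WHERE t.label IN ('Furnished', 'Duplex')
            GROUP BY p.id, p.name
            HAVING COUNT(DISTINCT t.label) = 2
        """)
        print(cur.fetchall())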

  • Best practices for image compression in DPS

    Hello! I have been reading up on best practices for image compression in DPS, and I read that the source assets for panoramas, image sequences, pan-and-zoom images, and audio skins are NOT resampled on upload. You need to resize and compress them before dropping them into your article, because DPS does not do it for you. OK, can do!

    I also read that the source assets for slideshows, scrolling images, and buttons ARE resampled as PNG images. Does this mean that DPS will compress them for you when you build the article? Does this mean I shouldn't bother resizing these images at all? Can I just drop in the 300 DPI, 15 MB files used in the print magazine, and DPS will compress them when building the article, with no effect on the file size?

    And is this also the case with static background images?


    Thanks for your help!

    All images are automatically resampled based on the size of the folio you create. You can put in any image resolution you want; it doesn't matter.

    Neil

  • Best practices for managing path policies

    Hello

    I am getting conflicting advice on best practices for path policies.

    We are on ESXi version 4.0, connecting to an HP EVA8000. The HP best-practices guide recommends setting the path policy to Round Robin.

    This seems to give two active paths to the optimized controller. See: http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA1-2185ENW.pdf

    We have used certain consultants, and they say that the VMware best practice for this solution is to use the MRU policy, which results in a single path to the optimized controller.

    So, any idea which good practice is the best practice? Does it make a difference?

    TIA

    Rob.

    Always go with the recommendation of the storage provider. VMware's recommendation is based on generic array characteristics (controller type, ALUA capability, failover methods, etc.). The storage provider's recommendation is based on their performance and compatibility testing. You may want to review their recommendations carefully, however, to ensure that each point is what you want.

    With the 8000, I ran with Round Robin. It is the more robust option from a failover point of view, given the paths available to you, and it can provide more even performance across the ports on the storage controller.

    While I did not do specific tests/validation, the last time I looked at the docs, the HP-recommended configuration was to switch ports on every IO. This adds load on the ESX host for switching to the other ports, but HP claims that their tests showed it to be the optimal configuration. It was the only parameter in their recommendation that I questioned.

    If you haven't done so already, be sure to download the HP doc on configuring ESX with EVA arrays. There are several parameters you must configure besides the path policy, as well as a few scripts to help make the changes.

    Happy virtualizing!

    JP

    Please consider awarding points to useful or appropriate responses.

  • Best practices for CentOS (5.3)

    Hello

    I'm looking to run CentOS 5.3 in an ESX 3.5 / VI 2.5 environment. Which selection is considered best practice for this OS during virtual machine creation? Should I set it to Red Hat Enterprise Linux 5, or should I use Other 2.6.x Linux - 32 bit?

    Thanks in advance,

    Tim

    CentOS aims to be a 100% binary replica of upstream. For all practical purposes, it is the RHEL SRPMs rebuilt with the artwork removed and a few extra CentOS packages added.

    So choose the RHEL VM type and use the RHEL rpm for the tools. One thing you can do is edit /etc/init.d/vmware-tools and change the start/stop order. After all this time, I still don't know why they left it at K08vmware-tools when K90network causes things like postfix to not stop...

    HTH

  • Best practices for placing the master image

    I'm doing some performance / load-test analysis for View, and I'm curious about best practices for placing the master image VM. The question is asked specifically regarding disk I/O and throughput.

    My understanding is that each linked clone still reads from the master image. If that is correct, then it seems you would want the master image to reside in a datastore located on the same array as the rest of the datastores that house the linked clones (and not on some lower-performing array). The reason I ask is that my performance testing is based on some future SSD products. Obviously the amount of available space on the SSD is limited, but it provides immense amounts of I/O (100k+ IOPS and higher). I want to be sure that by putting the master image on a datastore that is not on the SSD, I am not invalidating the high-end SSD IO performance I am after.

    This leads to another question: if all the linked clones read from the master image, what is the general practice for the number of linked clones to deploy per master image before you start to have IO-contention problems against that single master image?

    Thank you!

    -


    Omar Torres, VCP

    This isn't really necessary. Linked clones are not directly linked to the parent image. When a desktop pool is created and uses one or more datastores, the parent is copied into each datastore; that copy is called a replica. From there, each linked clone is attached to the replica of the parent in its local datastore. It is an unmanaged replica and offers the best performance, because there is a copy in every datastore that contains linked clones.

    WP

Maybe you are looking for

  • Qosmio F60 health monitor issues after BIOS update

    Hello. I updated my F60 BIOS today from 2.00 to 2.10. Before the update, Toshiba PC Health Monitor said the fan would run at around 9% at idle. At idle, the monitor now says it's 50%. The thing is, I can tell by listening that the fan is not spinning more i

  • FIXED/SOLVED! V570 does not initialize / start up

    So, like many V570 owners, I ran into the same problem where the laptop won't boot any more. I'm not on warranty any more, so I couldn't send it to Lenovo or anywhere else. From what I've read, this problem appears when the laptop goes to sleep (close the lid when

  • "1 MHz" for the name of the source on the time loop does not work on LabVIEW Real-time 9.0

    A Timed Loop is running as expected if a timing source is manually chosen in the configuration dialog box for "1 kHz" (first case) and "1 MHz" (second case). If the timing source name is set through the input terminal to "1 kHz", the loop is execu

  • Wireless Bluetooth upgrade dv7-3067cl

    I want to upgrade the internal card in my dv7-3067cl for wireless Bluetooth capability. The current wireless card in the laptop is a Qualcomm Atheros 5009 802.11 b/g/n PCIe mini card. What would be a good compatible card with Wireless-N & Bluetooth on a

  • Flash Player on BlackBerry smartphones

    Hello! I am new... I am having problems downloading Flash Player. Can someone help me? TKS.