Using the cyclic data record

Hello

Before I begin, let me briefly describe my hardware and what I want to achieve.

I have an SCXI-1000 (chassis), an SCXI-1102C (module), and an SCXI-1303 (terminal block, mainly for the temperature sensor inputs). A few months ago I also bought an SCXI-1180 feedthrough panel and an SCXI-1302 terminal block for my digital input signals (see the attached PNG file). In fact, I have two digital input signals. The first digital signal (entering as PFI0) is the injection pulse that triggers the start of recording my temperature profile; the second signal (entering as PFI1) is the one I want to use as a reference trigger to stop recording and save that injection cycle.

Then the injection cycle keeps going. Please see the attached VI for more details.

I got the digital trigger working to start recording my temperature sensor profile. However, I cannot get the second trigger to stop the recording.

When I run my VI, I get an error saying that a reference trigger applies only to finite sampling, but it seems I need to sample my temperature profile continuously. Could you look through my VI and suggest how to resolve this conflict between continuous sampling and the reference trigger?

Your help is highly appreciated.

HW


Tags: NI Hardware

Similar Questions

  • How to extract data using the xml data type

    Hello
    I tried the following example using the XML data type, but did not get the desired output.
    Could you please correct the query to obtain the necessary records?
    CREATE TABLE TEST.EMP_DETAIL
    (
      EMPNO       NUMBER,
      ENAME       VARCHAR2(32 BYTE),
      EMPDETAILS  SYS.XMLTYPE
    )
    Insert into EMP_DETAIL
       (EMPNO, ENAME, EMPDETAILS)
     Values
       (7, 'Martin', XMLTYPE('<Dept>
      <Emp Empid="1">
        <EmpName>Kevin</EmpName>
        <Empno>50</Empno>
        <DOJ>20092008</DOJ>
        <Grade>E3</Grade>
        <Sal>3000</Sal>
      </Emp>
      <Emp Empid="2">
        <EmpName>Coster</EmpName>
        <Empno>60</Empno>
        <DOJ>01092008</DOJ>
        <Grade>E1</Grade>
        <Sal>1000</Sal>
      </Emp>
      <Emp Empid="3">
        <EmpName>Samuel</EmpName>
        <Empno>70</Empno>
        <DOJ>10052008</DOJ>
        <Grade>E2</Grade>
        <Sal>2530</Sal>
      </Emp>
      <Emp Empid="4">
        <EmpName>Dev</EmpName>
        <Empno>80</Empno>
        <DOJ>10032007</DOJ>
        <Grade>E2</Grade>
        <Sal>1200</Sal>
      </Emp>
    </Dept>
    '));
    I need to get the record for Empid = '2'.
    I then tried the following query, without the expected output:
    SELECT a.empno,a.ename,a.empdetails.extract('//Dept/Emp/EmpName/text()').getStringVal() AS "EmpNAME",
         a.empdetails.extract('//Dept/Emp/Empno/text()').getStringVal() AS "EMPNumber",
          a.empdetails.extract('//Dept/Emp/DOJ/text()').getStringVal() AS "DOJ",
          a.empdetails.extract('//Dept/Emp/Grade/text()').getStringVal() AS "Grade",
          a.empdetails.extract('//Dept/Emp/Sal/text()').getStringVal() AS "Salary",
          a.empdetails.extract('//Dept/Emp[@Empid="2"]').getStringVal() AS "ID",
          a.empdetails.extract('//Dept/Emp[EmpName="Coster"]').getStringVal() AS "CHK"
         FROM emp_detail a 
         where empno=7  
               AND a.empdetails.existsNode('//Dept/Emp[@Empid="2"]') =1
    Thank you...

    Karthick_Arp wrote:
    I'm not very good at this... but shouldn't your XML look more like this?

    SQL> Insert into EMP_DETAIL
    2     (EMPNO, ENAME, EMPDETAILS)
    3   Values
    4     (7, 'Martin', XMLTYPE('<Dept>
    5     <Emp>
    6       <Empid>1</Empid>
    7       <EmpName>Kevin</EmpName>
    8       <Empno>50</Empno>
    9       <DOJ>20092008</DOJ>
    10      <Grade>E3</Grade>
    11      <Sal>3000</Sal>
    12    </Emp>
    .. cut ..

    Why? It is perfectly valid to store data as attributes rather than as elements, and it is also quite common for key values.
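    Outside the database, the selection logic of that XPath predicate can be sanity-checked with Python's xml.etree. This is only an illustration of the `//Dept/Emp[@Empid="2"]` predicate (with made-up sample data), not of Oracle's extract() syntax:

    ```python
    # Sketch: select the <Emp> whose Empid attribute equals "2",
    # mirroring the predicate //Dept/Emp[@Empid="2"] used in the SQL above.
    import xml.etree.ElementTree as ET

    dept_xml = """<Dept>
      <Emp Empid="1"><EmpName>Kevin</EmpName><Empno>50</Empno></Emp>
      <Emp Empid="2"><EmpName>Coster</EmpName><Empno>60</Empno></Emp>
    </Dept>"""

    root = ET.fromstring(dept_xml)
    # ElementTree supports attribute predicates in its limited XPath subset
    emp = root.find('.//Emp[@Empid="2"]')
    print(emp.findtext('EmpName'))  # Coster
    ```

    The same idea applies to the query: the predicate has to appear in each extract() path (or the row has to be isolated first), otherwise every Emp node's text is concatenated into the result.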

  • Android app and native application using the same push record ID

    Hello

    We have an existing Android app, converted to run on the BB10 platform, that we are in the process of gradually replacing. The replacement is a native BB10 application with the same name and features.

    My question is: do I have to get a new push record ID for the new native application, or can I use the existing production record ID?

    Ideally, I would like to migrate my users off the old version of the application to the new one with the least disruption for customers.

    Thank you

    Andrew

    You should be able to use the same ID for this record.

  • Cannot use the bb data components

    Hello, I am a newbie. I recently moved from QtQuick to Cascades, and now I'm trying to build some simple applications to get started. I want to use a DataSource { } in my application. I imported bb.data 1.0 in the main QML file and included bb/data/DataSource in the main C++ file. But when I try to use the DataSource in QML, it gives me the following error: "Default property of 'controls': type mismatch. Expecting bb::cascades::Control and found DataSource."

    Can someone help?

    The problem is that you have put the DataSource as the first element in the container, without placing it in attachedObjects. The container has a "default" property that is a list of type Control, and DataSource is not a subclass of Control. Check the DataSource examples in the docs; they show how to use it inside an attachedObjects: [] list.

  • Good way to use the concurrent data store

    Hello

    I'm developing a multithreaded C++ application that uses the Berkeley DB C++ library.

    In my case, I have several databases that I group in one environment. It is important for me to use an environment because I need control over the cachesize parameter.

    I don't need any transaction guarantees and have mostly reads, so I decided to use the Concurrent Data Store product.

    I first pre-fill all databases with a number of entries (a single-threaded configuration phase) and then work on them concurrently (mostly reads, but also insertions and deletions).

    I tried all kinds of different configurations, but I can't get it to work without specifying DB_THREAD as an environment flag.

    I don't want that, because then all access through a shared handle is serialized, according to the documentation:

    "...Note that enabling this flag will serialize calls to DB when using the handle across threads. If concurrent scaling is important for your application, we recommend opening separate handles for each thread (and not specifying this flag), rather than sharing handles between threads."

    (Berkeley DB C++ documentation)

    So I tried to open the environment with the following flags:

    DB_CREATE | DB_PRIVATE | DB_INIT_MPOOL | DB_INIT_CDB

    All database handles in this environment are opened with only the DB_CREATE flag.

    Since, as I understand it, access to the same database handle needs to be synchronized, I opened separate handles for each database in each thread (opening the handles is still single-threaded).

    In my first approach, I only used a single global environment object. That does not work and gives the following error message during operations:

    DB_LOCK-> lock_put: Lock is no longer valid

    So I thought that, since the same global env handle is passed to all the separate DB handles, there is perhaps a critical race condition on the env handle.

    So in my next test, I also opened separate env handles in each thread (each owning its own db handles).

    That does not produce a DB error, but now it seems that each thread sees its own version of the databases (I call stat early in the life of each thread and it sees all the DBs as empty).

    What is the right way to use the Concurrent Data Store? Should each thread really open its own set of DB handles? And what about the number of open env handles?

    PS: Not specifying the DB_PRIVATE flag seems to do the job, but for performance reasons I want all operations to be performed in the cache; in my scenario, not specifying DB_PRIVATE results in several writes to disk.

    Thanks a lot for your help.

    CDS (Concurrent Data Store) allows a single writer with multiple readers to access the db at a given point in time. The writer's handle doesn't have to be shared with the readers. If you share a DB handle then calls are serialized, but if each thread has its own DB handle then this is not the case. Since you have an environment, DB_THREAD must be set at the environment level; this allows sharing of the environment handle. For this kind of error, "DB_LOCK-> lock_put: Lock is no longer valid", could you provide us your code so we can take a look? Also, what BDB version are you using?

  • What value for p_print_server when using Oracle REST Data Services as a print server?

    Hello

    I have my report server in the internal workspace defined as a BI Publisher URL by default... It works for my BI Publisher reports.

    I have 2 reports that need to use the REST Data Services FOP engine via the GET_PRINT_DOCUMENT function, Signature 4:

    https://docs.Oracle.com/CD/E59726_01/doc.50/e39149/apex_util.htm#AEAPI146

    For these 2 reports, I need to pass a value for the p_print_server parameter. The documentation describes this as the URL of the print server.

    My question is: how do I find the URL for p_print_server when I want to use the REST Data Services FOP engine on ORDS?

    I cannot pass NULL, because then it will use the default BI Publisher URL and these 2 reports will not work.

    Moreover, when I set the report server in the internal workspace to "REST Data Services", my two reports work fine, but not when I set it to the BI Publisher URL, which is what I need.

    Help, please

    Regards

    Matt

    Hi Matt Mulvaney,

    Matt Mulvaney wrote:

    I already have the correct configuration to produce FOP reports, including the two steps you mentioned. This setup works very well when I set the "Print Server" preference in the APEX instance settings to REST Data Services, and reports can be produced. But as I said, it must be set to a BI Publisher URL.

    If you set "Print Server" to "Oracle BI Publisher", then the p_print_server parameter would be:

    http://myserver.mydomain.com:8888/xmlpserver/convert

    Where:

    • Print server host address: myserver.mydomain.com (you can also use the IP address)
    • Print server port: 8888 (or whichever port your BI Publisher server uses)
    • Print server script: /xmlpserver/convert

    Similarly, if you choose "Print Server" as "Oracle REST Data Services", the instance settings don't ask for print settings, but internally it uses the following parameters and the p_print_server parameter is:

    http://myserver.mydomain.com:8080/ords/_/fop2pdf

    Where:

    • Print server host address: myserver.mydomain.com (the host on which ORDS is hosted)
    • Print server port: 8080 (the port that ORDS/APEX uses)
    • Print server script: /ords/_/fop2pdf (this is not disclosed in the APEX/ORDS documentation, but was mentioned by Marc Sewtz in the thread below)

    Reference: Re: Apache FOP missing from the Oracle APEX installation in APEX 5.0 and APEX 5.0.1

    Kind regards

    Kiran

  • How to POST JSON using Oracle REST Data Services

    I use a regular Oracle database (not NoSQL or anything) with Oracle REST Data Services. Now I need to POST/PUT data written in the body of the request in some JSON/XML format. How do I consume it with the REST Data Services RESTful service defined inside Application Express? Important: using a PL/SQL block.

    Also, I am on,

    Oracle REST Data Services 3.0

    Oracle Application Express 4.2

    Post edited by: Jacynthe

    OK, I got the answer. In the Application Express RESTful service there is a bind variable called :body, of data type BLOB. Since it is a BLOB, we have to convert it to another data type to work with it in PL/SQL. I converted it to CLOB, which works with JSON.
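    The same idea, sketched in Python terms outside the database (the sample payload is made up): the raw request body arrives as bytes (like the :body BLOB) and must be decoded to text (like the CLOB) before the JSON can be parsed.

    ```python
    import json

    # The request body arrives as raw bytes (analogous to the :body BLOB bind)
    raw_body = b'{"empno": 60, "ename": "Coster"}'

    # Decode bytes to text (analogous to converting the BLOB to a CLOB)...
    text_body = raw_body.decode("utf-8")

    # ...then parse the JSON text into a usable structure
    payload = json.loads(text_body)
    print(payload["ename"])  # Coster
    ```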

  • Is there a document on how to use data card lookup values?

    I'm looking for documentation where I could find information about data card sets and how to use a data card configured as a lookup.

    Any help is much appreciated.

    I found a few threads on data cards:

    http://topliners.Eloqua.com/docs/doc-2434

    http://topliners.Eloqua.com/docs/doc-2817

    http://topliners.Eloqua.com/message/14058#14058

    Maybe these can help you get started.

  • There seems to be a problem with the software. We use CS6 to record services, and when we try to save the recording, only part of it is stored. The recording saved as an mp3 file is usually 70 to 100 KB, but recently only 3 KB are saved

    There seems to be a problem with the software.  We use CS6 to record services, and when we try to save the recording, only part of it is stored.  Usually the recording saved as an mp3 file is 70 to 100 KB, but recently only 3 KB are recorded.  What should I do to fix this?

    You may need to reset your Audition preferences files, stored in C:\Users\"username"\AppData\Roaming\Adobe\Audition\5.0. If you rename this folder to 5.0.bak, Audition won't find it the next time you open it, so it will recreate a new settings folder with the default settings. See if Audition then works as expected.

  • Using a default date for a presentation variable

    In the report, I want to use a default date for the presentation variable.
    If I use the query below, the default works correctly, but if I pass the date value from the dashboard prompt, it throws an error.

    Can someone help me change the query below to get valid results?

    TIMESTAMPADD (SQL_TSI_day, (dayofmonth(date @{asdf}{date '1900-01-01'}) *-1) + 1, date @{asdf} {date ' 1900-01-01'})

    Published by: user12255470 on December 2, 2010 12:11

    Published by: user12255470 on December 2, 2010 12:12

    Try this:
    TIMESTAMPADD(SQL_TSI_DAY, (DAYOFMONTH(date '@{asdf}{1900-01-01}') * -1) + 1, date '@{asdf}{1900-01-01}')

    Please mark answers promptly.

    J
    -bifacts
    http://www.obinotes.com

    Published by: bifacts on December 2, 2010 15:21
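    The TIMESTAMPADD expression above simply rolls the date back to the first day of its month by subtracting (day-of-month - 1) days. The same arithmetic, sketched outside OBIEE in Python with an arbitrary sample date:

    ```python
    from datetime import date, timedelta

    d = date(2010, 12, 2)
    # Subtract (day-of-month - 1) days, i.e.
    # TIMESTAMPADD(SQL_TSI_DAY, (DAYOFMONTH(d) * -1) + 1, d)
    first_of_month = d - timedelta(days=d.day - 1)
    print(first_of_month)  # 2010-12-01
    ```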

  • Performance impact when using the LONG data type

    Hi all


    I have a doubt about the use of the LONG data type.

    I use the LONG data type for some columns in a table and I have created indexes on those columns separately, but the column values could easily fit into a VARCHAR data type.

    (Just to see the performance impact of this.)

    Would a common SELECT query with a WHERE condition on any of the LONG columns affect the performance of the query?

    Please explain.

    Thank you

    (1) The LONG (and LONG RAW) data types have been deprecated for quite a while; Oracle has been strongly recommending since 8.1.5 that you move to the CLOB and BLOB data types. Why are you using the LONG data type? Are you still on Oracle 7?

    (2) Have you tried to write a query that has a WHERE condition that refers to a LONG column? In general, you cannot, because the LONG data type does not support it. For example:

    SQL> ed
    Wrote file afiedt.buf
    
      1  create table a (
      2    col1 varchar2(30),
      3    col2 long
      4* )
    SQL> /
    
    Table created.
    
    SQL> select * from a where col2='abc';
    select * from a where col2='abc'
                          *
    ERROR at line 1:
    ORA-00997: illegal use of LONG datatype
    

    LONG data types are a royal pain to work with. They also have wicked performance implications on the client.

    Justin

  • Sorting records by approval date in HRMS

    Hi All

    I have a table that displays records after a search
    on the search page.

    Now I want to add a column at the end of this table which shows the approval date.
    The approval is done by a director with a different ID and password.

    How can I show the approval date of the vacancy?


    I also want to show a drop-down list with ascending and descending options;
    the approval date column must be sorted on this basis.

    How can I achieve this?

    Hello

    If the display attributes for the corresponding fields are available in the VO (responsible for displaying the table columns), you can:

    1.) Create the fields through personalization.
    2.) Extend the controller and apply the ORDER BY condition dynamically in processFormRequest (PFR) on the VO responsible for displaying the values in the table columns.
    3.) Check whether you are able to add a PPR action event on the drop-down when creating this field through personalization; if so, handle the PPR on this field in processRequest (PR) of the extended controller.

    If the corresponding attributes are not available, you must extend the VO (responsible for displaying the table columns) too.
    Thank you
    Pratap

  • Using the year part of the prompted date in the column header

    Version 11.1.1.7

    I have a dashboard prompt which asks for a date.

    My analysis is filtered on this prompt.

    My data looks like this:

    Accounting_Date    FY1_Data  FY2_Data  FY3_Data
    August 31, 2014    200       300       400
    August 31, 2013    275       325       450

    My requirement is to display the fiscal year as the column heading (as opposed to FY1_Data, FY2_Data, FY3_Data).

    FY1_Data is the year of the Accounting Date - 2.

    FY2_Data is the year of the Accounting Date - 1.

    FY3_Data is the year of the Accounting Date.

    So, for the first record in my example table, the column headers must be (FY2012, FY2013, FY2014).

    For the second record, the column headers must be (FY2011, FY2012, FY2013).

    To do this, I tried the following for the first column only:

    I set the column header to FY@{Year1}.

    Then I set my dashboard Accounting Date prompt to populate a presentation variable (ActPrdDt).

    Then I created another dashboard prompt of type "Variable" with a variable name of Year1.

    In this prompt, I used SQL Results as the default selection:

    SELECT "Date_Table"."Accounting_Date" FROM "MySubjectArea" WHERE "Date_Table"."Accounting_Date" = @{ActPrdDt}

    The problem is that the full date is displayed in the column header (e.g. 08/31/2014).

    I tried modifying the SQL statement in my second dashboard prompt to

    SELECT TO_CHAR("Date_Table"."Accounting_Date", 'YYYY') - 2 FROM ...

    but the column header displays all values in the column, which is usually what it displays when there is an error in the SQL statement.

    Is it possible to manipulate the SQL statement to get only the year of the Accounting Date and subtract 1 or 2 from it?


    I changed the SQL statement

    from

    TO_CHAR("Date_Table"."Accounting_Date", 'YYYY') - 2

    to

    EXTRACT(YEAR FROM "Date_Table"."Accounting_Date") - 2

    This solved my problem.
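    The fix works because EXTRACT(YEAR FROM ...) yields a number the prompt can do arithmetic on, while TO_CHAR yields a string. The same calculation, sketched in Python with the sample date from the table above:

    ```python
    from datetime import date

    accounting_date = date(2014, 8, 31)
    # EXTRACT(YEAR FROM "Accounting_Date") - 2 / - 1 / - 0,
    # as in the corrected prompt SQL
    fy1 = accounting_date.year - 2   # header for FY1_Data
    fy2 = accounting_date.year - 1   # header for FY2_Data
    fy3 = accounting_date.year       # header for FY3_Data
    print(f"FY{fy1}", f"FY{fy2}", f"FY{fy3}")  # FY2012 FY2013 FY2014
    ```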

  • RN102 - "please remove inactive volumes in order to use the disk" - data loss

    ReadyNAS 102, 1 x 1 TB disk (in the 2nd drive bay), owned for 10 days, running 6.2.5.

    I had copied perhaps 25 DVDs of data to the disk (1 volume, across about 4 shares). I lost the network connection to the NAS, could not get the NAS to respond, and finally unplugged the power from the back. After a reboot it worked, and on reconnecting I noticed ALL my data on the drive had disappeared. No matter what I do with the power, I shouldn't be able to have this catastrophic failure.

    If I go to Volumes, it shows the volume with a red dot in the upper left, and Data and Free are both 0.

    There is also a balloon message: "please remove inactive volumes in order to use the disk. Disk #2."

    If I go to Actions it says: no volume or USB drives. It is recommended to create a volume before you set up anything else...

    If I had 2 drives (copies of each other), would I still have lost all of this data? It seems that disk reliability is not my main concern, but the enclosure is.

    I've never had a PC lose all its data by making a drive look as if it crashed while the software invites me to use it. It looks like the OS metadata managing the contents of the disk is broken.

    Is this a known issue I may be running into?

    Do you have the USB key with the encryption key connected to the NAS? The key must be plugged in for the volume to be mounted at power-up.

    Looks like you will need to contact support.

  • How can I use the USRP to record a signal using its two RX ports simultaneously?

    Hello.

    I am trying to record a signal using two cone antennas. The reason is that I need two antennas to cover the bandwidth (DC - 6 GHz): a single antenna covers DC - 300 MHz and the other covers 300 MHz to 6 GHz. So I need to use the two RX ports of the USRP at the same time to record the signal. I have two questions:

    1. Is there any USRP on the market capable of covering this frequency range?

    2. Is it possible to use the two RX ports at the same time to record the signals I described? If not, how can it be done?

    P.S. I have two NI 2920 USRPs and two N210 USRPs in my lab.

    Thanks in advance for your time.

    Sam.

    Hi Sam,

    To answer your first question, none of the USRPs you have can reach the bandwidth you want. There is not a USRP, to my knowledge, that can reach this range in a single device.

    Also note that you cannot use the two RX ports at the same time for two different frequency ranges using LabVIEW and the USRP driver. If you want to use the two RX lines, you will need to run a session with a single line, close the session, and then start a different session for your second RX line.
