Question about LabVIEW FPGA DRAM addressing

Hi all

How can I correctly address the 128-bit DRAM memory?  I have DRAM Bank 0 set up in my design as a CLIP, configured as a 128-bit memory.  I realize it is a wide RAM with a 32-bit data bus.  A National Instruments AE did the original design I've been adding to.  He said that the addresses needed to increment by four with each entry.  Example: if I had to write to consecutive locations, I would write to addresses 0, 3, 7, 11, 15, etc., and I would like to send 128 bits to each address.  My address is calculated as: (pixel number within a video line) + (line number) * (number of pixels per line), for one frame of video.  So I take my calculated address and add 4.

However, I checked an example in the Example Finder: Hardware Input and Output/FlexRIO/External Memory/Memory.  In this example, 128-bit data is sent to the memory and the address is incremented by 1 (instead of 4) on each clock cycle with valid data.

Which is correct?  The Help section for this function is ambiguous.

Sets the address in external memory for reading or writing. The physical data bus for external memory is 32 bits wide (4 bytes). Each unique address value represents 4 bytes of data. Therefore, the total number of unique addresses in external memory is equal to (Memory Size in bytes)/4.  

Note  The memory interface exposed to LabVIEW FPGA is 128 bits wide. As a result, each memory write or read operation accesses four different address locations in memory. The memory controller latches this signal value only when you issue a new memory write command by asserting the Command_Write_Enable signal.

I'm confused by the second paragraph, "each memory write or read operation accesses four different address locations in memory."  Does that mean I increment the address by 1 to get 128 consecutive bits of 'locations' (yes, I know, that's four 32-bit words in memory), or do I increment the address by 4, on the grounds that the four 32-bit words make up a single 128-bit transfer?

Thanks for your help.

-J

Hello J,

I want to clarify my previous post.  There are two ways to access the DRAM memory: the CLIP (which is what you have described and what you are doing) and the memory node.  As noted before, the DRAM is 128 bits wide.  When you write through the CLIP, you basically write pieces that are the width of the data bus (in this case 32 bits).  Therefore, when you write a full 128 bits to DRAM, you place 32 bits at each of four addresses.  Since each address corresponds to the width of the data bus, a single write fills addresses 0, 1, 2, and 3.  The next write will fill 4, 5, 6, and 7, then addresses 8, 9, 10, and 11, and so on.  In this case, you must increment your address by 4 each time you write.  Note that you start at 0, then 4, then 8, then 12, etc.  In your previous post, you were off by one.
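
To make that arithmetic concrete, here is a minimal C sketch of how the CLIP address for the nth 128-bit entry could be generated from pixel/line coordinates; the names and the frame geometry are illustrative only and not part of any NI API.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative frame geometry only. */
    #define PIXELS_PER_LINE 640u

    /* CLIP addressing: each address covers 32 bits, and one 128-bit entry
     * occupies four consecutive addresses, so consecutive entries start
     * 4 addresses apart. */
    static uint32_t clip_address(uint32_t pixel, uint32_t line)
    {
        uint32_t entry = pixel + line * PIXELS_PER_LINE; /* nth 128-bit entry */
        return entry * 4u;                               /* 0, 4, 8, 12, ... */
    }

    int main(void)
    {
        for (uint32_t p = 0; p < 4u; p++)
            printf("entry %u -> CLIP address %u\n", p, clip_address(p, 0));
        return 0;
    }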

There is also another way to write to the DRAM, and that is through the memory node, which is what the example you are pointing to uses.  Here LabVIEW takes care of some of the bookkeeping: instead of the addresses being the width of the data bus, they are the width of an entire 128-bit segment.  So when you write to DRAM this way, you only increment the address by 1 each time, because each address refers to a whole segment of memory.  Contrasting this with the CLIP, address 0 of the memory node interface corresponds to CLIP addresses 0, 1, 2, and 3, and address 1 of the memory node corresponds to CLIP addresses 4, 5, 6, and 7.  If you do not write a full 128 bits through the memory node, the remaining addresses in that data block are filled with "junk" so that the addressing stays consistent.
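
To illustrate the difference between the two interfaces, here is a small C sketch that maps a memory-node address onto the four CLIP addresses it covers; this is only a model of the numbering, not NI code.

    #include <stdint.h>
    #include <stdio.h>

    /* One memory-node address spans a whole 128-bit segment,
     * i.e. four 32-bit CLIP addresses. */
    static void node_to_clip(uint32_t node_addr, uint32_t clip_addr[4])
    {
        for (uint32_t i = 0; i < 4u; i++)
            clip_addr[i] = node_addr * 4u + i;
    }

    int main(void)
    {
        for (uint32_t node = 0; node < 3u; node++) {
            uint32_t clip[4];
            node_to_clip(node, clip);
            printf("memory node address %u -> CLIP addresses %u, %u, %u, %u\n",
                   node, clip[0], clip[1], clip[2], clip[3]);
        }
        return 0;
    }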

As I mentioned previously, it is most efficient to write in chunks of 128 bits so that you don't waste any of the DRAM.  I hope you find this explanation clearer.

Brandon Treece

Technical sales engineer

National Instruments

Tags: NI Software

Similar Questions

  • Support for LabVIEW FPGA


    Is LabVIEW FPGA support available for the ML-505 kit (FPGA type: Virtex-5 LX110T) or not? If yes, where is it?

    The only FPGAs supported by LabVIEW FPGA are the ones included in National Instruments hardware. You can't target other kits with LabVIEW FPGA.

  • Need Spartan-3E board driver for LabVIEW FPGA 8.5! THX

    Hello!! I need the Spartan-3E driver for LabVIEW 8.5 (not for LabVIEW 8.6) for school. I tried to download it from ftp://ftp.ni.com/outgoing/NISPARTAN3ELV85.zip
    but this link is dead now. Could anyone send the driver to [email protected], please? Thank you very much in advance.

    kongleelk,

    You can find the file at the URL above for a few days. Let me know if you have any problems.

  • 64-bit driver for LabVIEW FPGA 2009 Xilinx Spartan-3E Starter Board

    Dear all, I need this add-on.

    When I install the module I have, it reports:

    Support for LabVIEW for Spartan-3E (incompatible with the 64-bit platform)
    Is there a supported 64-bit version?

    Best regards

    Hello mangood,

    There is unfortunately no way to use the driver on 64-bit Windows. You will need to use a 32-bit operating system to use the Spartan drivers. Sorry for the inconvenience.

  • Version of the C API for LabVIEW FPGA 2011

    What is the version of the C API that will work with LabVIEW FPGA 2011?

    My guess is this one: http://www.ni.com/download/fpga-interface-c-api-2.0/2616/en/

    Version numbering seems to switch to years starting in 2012.  This is the latest version I could find before 2012, and it was released in August 2011.  That timing coincides with the annual NIWeek festivities, where a large part of the software/hardware gets released.  It's a small download, so it shouldn't be difficult to download it and try it.

    But you'll still need the LabVIEW FPGA development environment, according to this white paper: http://www.ni.com/white-paper/9036/en/

  • LabVIEW FPGA: IP Integration Node clock issue

    Hello

    I'm having some difficulty understanding how the clock works with the IP Integration Node in LabVIEW FPGA and was hoping to get some advice.

    What I'm trying to do is set up a digital logic circuit with a MUX feeding a parallel 8-bit shift register. I created the schematic for this in Xilinx ISE 12.4 and was able to import the HDL code into an IP Integration Node. When I run the VI, I am able to choose between the two inputs of the MUX, load the output into the shift register, clear the shift register, and activate the CE.

    My problem is that when I toggle the CE input, it should shift a single 1 (Boolean true, high, what-have-you) into the register once per clock period. Unfortunately, it instantly makes all 8 bits 1s. I suspect it's a clock issue, and here are some of the things I've tried:

    -Specifying the input clock while going through the IP node configuration process.

    -Adding an FPGA Clock Constant as the source of the timed loop.

    -Removing the timed loop and just specifying the clock input (I'm not able to run the VI; I get an error that calls for a timed loop).

    -Not specifying the clock input in the IP node configuration and wiring an FPGA Clock Constant to the clock input (I can't, because the input is generated as a Boolean).

    -Reverting to an earlier version of the CE that had two inputs going into an AND gate in ISE.

    -Specifying the CE in the IP node configuration process.

    -Not specifying the CE in the IP node configuration process and wiring it separately.

    -Various reconfigurations of the same things that I don't remember.

    I think I'm doing something wrong with the clock, and that's the problem I have. Previously, when I asked questions on the board about importing ISE code into LabVIEW FPGA, a clock signal was not necessary and I was advised to just use a timed loop. Now I need to use one, but I am unable to find an explanation online since this is an IP Integration Node.

    Any advice would be greatly appreciated; I'm working on a project for which understanding how clocks work with the IP Integration Node will be crucial.

    Thanks in advance,

    Yusif Nurizade

    P.S. I have attached my ISE schematic and the LabVIEW project with one of the incarnations of the VI. The site won't let me add the .vhd file as an attachment, but if it would help I could just paste the body of the VHDL code, so just let me know.

    Hello Françoise,

    I spoke to the NI engineer about this topic, and it seems that it was sufficient to verify that your code works by putting a 500 ms Wait function in the while loop to check that the registers load and clear. I'm glad that it worked!

  • Xilinx/Multisim drivers and LabVIEW FPGA

    Where can I find drivers for my NI FPGA board if I use Multisim/Xilinx and NOT LabVIEW?  All the links I found require LabVIEW to be installed.  However, the FPGA manual explicitly states that you can use Multisim/Xilinx ISE instead.

    OK, I just tested it. The LabVIEW 2013 DEFB driver contains 2 separate components: the driver and the FPGA Module support. If you run this installer, it won't check whether you have LabVIEW FPGA installed unless you check the box for LabVIEW FPGA support.

    I can change the text in the Installation Instructions to read "LabVIEW FPGA 2013 is required to install the LabVIEW FPGA Module Support component".

  • LabVIEW FPGA while loop (First Call? VI) question

    Hi gentlemen!

    I am creating a LabVIEW FPGA VI that runs inside a WHILE loop. It uses a First Call? VI so that, on the first iteration of the loop, a variable is initialized to a certain value. However, when I run the VI on the FPGA, it would seem that the First Call? VI is not behaving as if it were called. I also tried implementing this through shift registers, where the register is initialized outside of the WHILE loop. However, the result is always the same. May I ask how LabVIEW FPGA handles First Call? inside a loop? Thank you very much!

    For some reason that I don't remember, I avoided using First Call? on FPGA and instead I use a Boolean shift register, with a True constant wired to the left terminal (the initializer) and a False constant inside the while loop wired to the right terminal. As a result, you get a True on the first iteration only, in exactly the same way as the First Call? function. It might even use fewer resources on the FPGA.
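
    In text form, the pattern is just a flag that is true on the first pass only; here is a minimal C sketch of the equivalent logic (purely illustrative, since on the FPGA this is a Boolean shift register on the block diagram):

        #include <stdbool.h>
        #include <stdio.h>

        int main(void)
        {
            /* Stands in for a Boolean shift register initialized to TRUE
             * outside the while loop. */
            bool first_iteration = true;

            for (int i = 0; i < 5; i++) {
                if (first_iteration) {
                    /* work that must happen only on the first iteration */
                    printf("iteration %d: initializing\n", i);
                }
                first_iteration = false; /* FALSE wired back into the register */
            }
            return 0;
        }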

  • Question about LabVIEW support for DLLs and Unicode

    Hello

    I have a question about LabVIEW and DLL function calls.

    I use a DLL (sorry, I can't share it) that was written in C. It was written to support both Unicode and non-Unicode function calls.

    If Unicode is enabled, FunctionNameW is called; otherwise FunctionNameA is called.
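
    For context, the usual Windows-style A/W convention looks something like the C sketch below; FunctionNameA/W are placeholders standing in for the real exports, and the stub bodies are mine, not the DLL's.

        #include <stdio.h>
        #include <wchar.h>

        /* Stub bodies standing in for the DLL's real exports. */
        int FunctionNameA(const char *text)    { return printf("A: %s\n", text); }
        int FunctionNameW(const wchar_t *text) { return wprintf(L"W: %ls\n", text); }

        /* Typical header-side convention: a macro picks the variant depending
         * on whether UNICODE is defined at compile time. */
        #ifdef UNICODE
        #define FunctionName FunctionNameW
        #define T(s) L##s
        #else
        #define FunctionName FunctionNameA
        #define T(s) s
        #endif

        int main(void)
        {
            /* From LabVIEW's Call Library Function Node you would point at one
             * concrete export (FunctionNameA or FunctionNameW), not the macro. */
            return FunctionName(T("hello")) < 0;
        }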

    I am building a few VIs to access the library. I have the regular FunctionNameA functions working.

    My question is: does LabVIEW support the Unicode FunctionNameW versions of the functions, and if so, is that even necessary given that LabVIEW already works with the standard function calls?

    Am I being redundant, or should I build in Unicode support?

    The first time I tried to test the Unicode functions I got an error, and I'm guessing it is a system setting.

    Thank you for your time in advance.

    DB_IQ wrote:

    I don't think I HAVE to implement Unicode, but I want to if I can.

    For what I'm doing, I think it almost doesn't matter. But I wanted to know if it could be used.

    The short answer is "Yes, you can do it."  However, it may open a new Pandora's box.  If you're not careful, the problems and complications can spread to other projects that are not even using Unicode!  It is better not to summon this monster unless there is absolutely no other way to do the job.

  • Model a synchronous dual-port block RAM with LabVIEW FPGA

    This question caught my attention recently.

    I am trying to model a particular design element called "RAMB4_S8_S8" with the LabVIEW FPGA Module. This element is a synchronous dual-port block RAM allowing simultaneous access through two ports, independently of each other. That is, one port can perform a read/write operation on this RAM while, at the same time, the other port may be doing the same thing. There are two possible port-conflict scenarios, however. The first is when both ports try to write to the same memory cell. The other is when one port writes to a memory cell while, at the same time, the other port reads from it. Other than that, everything should be a legitimate operation.

    To reproduce this, I selected a memory block implemented in the block memory of my FPGA target. One interface is configured for read mode, and the other is set to write mode. For the arbitration option, I set both interfaces to "Arbitrate if Multiple Requestors Only". I then got a compile error when I tried to run my FPGA code with this memory inside an SCTL. The error message is something like "Multiple objects request access to a resource through an interface configured with the option Arbitrate if Multiple Requestors Only, which is supported only inside the single-cycle Timed Loop when there is only a single requestor per interface."

    This error goes away if I replace the SCTL with a simple while loop, but that is not what I would like to implement. So I wonder if there is a better solution to this problem, or if it is just a limitation of the LabVIEW FPGA Module.

    Thank you.

    Yes, you can use a form of pipelining to perform the operations you want over successive clock cycles, with all of the code inside a single SCTL. Basically, read the first address and store the result in a register in one cycle, then read the second address in the second clock cycle. This gives you two valid memory reads every 2 clock cycles. I have included a crude snippet to illustrate the concept. The case selectors are identical, with address A connected to the memory in the True case and address B in the False case. Your larger dual-port memory model will remain intact, but it will operate at 1/2 rate.
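
    A rough C model of the same time-multiplexing idea, assuming a single read port and two requested addresses A and B; it only illustrates the alternating-cycle scheme and is not LabVIEW or VHDL code.

        #include <stdint.h>
        #include <stdio.h>

        #define MEM_DEPTH 16u

        /* Single-port memory model: one read per "clock cycle". */
        static uint8_t mem[MEM_DEPTH];

        int main(void)
        {
            for (uint32_t i = 0; i < MEM_DEPTH; i++)
                mem[i] = (uint8_t)(i * 3u);      /* arbitrary test data */

            uint32_t addr_a = 2u, addr_b = 9u;   /* the two requested addresses */
            uint8_t  reg_a = 0, reg_b = 0;       /* registers holding the results */
            int      phase = 0;                  /* selects which address is served */

            /* Each loop iteration stands in for one SCTL cycle: address A is
             * read on even cycles, address B on odd cycles, so a pair of valid
             * reads is available every 2 cycles. */
            for (int cycle = 0; cycle < 4; cycle++) {
                if (phase == 0)
                    reg_a = mem[addr_a];
                else
                    reg_b = mem[addr_b];
                phase ^= 1;

                printf("cycle %d: reg_a=%u reg_b=%u\n", cycle, reg_a, reg_b);
            }
            return 0;
        }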

    Take a look at the white paper that provides more details on the memory options:

    Storing Data on an FPGA Target (FPGA Module)

    The bullet on block memory indicates that a dual-port memory block can only be implemented in a dual-read configuration, i.e. a dual-port ROM. Dual read/write port access must be emulated with custom code.

  • LabVIEW FPGA averaging

    I created a LabVIEW FPGA VI using a flat sequence structure that reads the output of a sensor at a 1 kHz sampling rate over a digital SPI interface.  After reading, I write the fixed-point data to a FIFO, which is read by a host VI and finally written to the hard disk for post-processing.  I need to add averaging logic to the FPGA VI for further processing of the signal.  I want to keep sending the original 1 kHz sampled data to the FIFO, but also perform averaging on the samples and write those results to the FIFO as well.  The averaging feature I would like to implement is a two-step process.  Step 1 is to take the 1 kHz samples and perform a 16-sample frame-based average.  In other words, I want to sum 16 of the 1 kHz samples, divide by 16, and decimate 16:1, which produces 62.5 Hz data.  Step 2 is to take the 62.5 Hz sampled data, perform a 16-sample moving average on it, and output the result at the same 62.5 Hz rate.  I want this 62.5 Hz data to be fed into the FIFO along with the original (unmodified) 1 kHz sampled data.
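
    To pin down the arithmetic I am after, here is a minimal C sketch of the two stages, using floating point and a synthetic ramp instead of the fixed-point SPI data; the names and sizes are only illustrative.

        #include <stdio.h>

        #define BLOCK 16   /* stage 1: 16-sample block average + 16:1 decimation */
        #define MAVG  16   /* stage 2: 16-sample moving average at 62.5 Hz       */

        int main(void)
        {
            double acc = 0.0;          /* stage 1 accumulator */
            int    n_in = 0;

            double window[MAVG] = {0}; /* stage 2 circular buffer */
            double win_sum = 0.0;
            int    win_idx = 0;

            /* Synthetic 1 kHz input: a slow ramp stands in for the sensor data. */
            for (int k = 0; k < 1000; k++) {
                double sample_1khz = k * 0.01;

                acc += sample_1khz;
                if (++n_in == BLOCK) {             /* every 16th sample -> 62.5 Hz */
                    double avg_62hz5 = acc / BLOCK;
                    acc  = 0.0;
                    n_in = 0;

                    /* Running sum over the last 16 decimated samples; the first
                     * 15 outputs still average in the initial zeros. */
                    win_sum += avg_62hz5 - window[win_idx];
                    window[win_idx] = avg_62hz5;
                    win_idx = (win_idx + 1) % MAVG;

                    printf("62.5 Hz sample %8.3f  moving avg %8.3f\n",
                           avg_62hz5, win_sum / MAVG);
                }
            }
            return 0;
        }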

    I've got step 1 working correctly using the "Mean, Variance and Standard Deviation" FPGA VI with the number of samples set to 16.  This block runs inside a frame of the flat sequence structure after I receive each 1 kHz sample over the SPI.  My struggle is the step 2 moving-average feature.  I am trying to use the code in the screenshot below, but am unclear about how/where to implement this logic inside my existing structure (flat sequence, separate while loop, case structure, etc.) in order to ensure that it only operates on the 62.5 Hz samples of that data stream, one at a time.  I tried putting it inside the sequence frame that executes the averaging block, and further inside a case structure driven by the "valid" Boolean output of the averaging block.  I obviously don't understand how these different loops execute, because it does not work properly.  Can someone tell me how to implement the moving-average logic in my existing FPGA VI to produce the desired results described above?  A screenshot of the step 2 moving-average logic I am trying to use is below.  In addition, attached are my VIs: the FPGA VI I need help with is 'CA215_SPI.vi' and the host-level VI is 'Host.vi'.  Thanks in advance.

    Joel

    This question is closed.  I realized that my implementation approach was actually working.  I just had a silly mistake in my fixed-point output bit size, which was giving me erroneous results.

  • Use of a block memory FIFO across two clock domains (LabVIEW FPGA)

    Greetings!

    I'm developing an application on the FPGA of the NI 5644R vector signal transceiver. I have two single-cycle timed loops: one at 40 MHz performing a convolution and writing to a block memory FIFO, and a second at 120 MHz (the sample clock) that reads from the block memory FIFO and uses the values for the subsequent interpolation...

    Under what circumstances is it permissible to use a block memory FIFO to transfer values from a 40 MHz loop to a 120 MHz loop (sample clock)?

    The reason I ask is that the compilation of my code repeatedly fails with the error below:

    ERROR: HDLCompiler:69 - "/opt/apps/NIFPGA/jobs/J9k7Gwc_WXxzSVD/Interface.vhd" line 193: is not declared.

    For everyone's reference, I am sharing screenshots of my code, which is an extension of the 'VST Streaming Project' sample provided with the NI 5644R. A brief description of the attachments is given below...

    1. "Top_level_FPGA_part1_modification.png": in a loop SCTL 120 MHz, a sub - vi bed FPGA

    go a block FIFO memory... In fact, the reading is actually made when entry

    "read_stream" is activated... (see details in read_from_fifo_true_case.png)

    2. "Top_level_FPGA_part2_modification.png": a 40 MHz SCTL, wherein is a subvi FPGA

    called to write the output of convolution to block FIFO memory.

    3. "target_respone_fpga_block_FIFO_modification.png": an output of a convolution filter is

    written in block FIFO memory each time that the convolution output is available...

    'ReadBlockFIFO' VI (circled in Top_level_FPGA_part1) is invoked in a 120 MHz SCTL.

    4. "read_from_fifo_false_case.png": when the input "read_stream' of this vi is false,

    data transfer memory FIFO of block to a different FIFO ('generation filter") takes

    place.

    5. "read_from_fifo_true_case.png": when the "read_stream' is set to true, the data is read in

    'Filter generation' FIFO and spent on the chain of later interpolation to the

    120 MHz SCTL...

    I hope the attachments give enough clarity about what I'm doing... If more information is needed, do not hesitate to ask...

    Kind regards

    S. Raja Kumar

    Greetings!

    I think I understand the problem... The error probably occurs because a DMA FIFO (FPGA to host) is read at 40 MHz while its number of elements is checked in a 120 MHz loop... This is not caught by LabVIEW FPGA's "pre-processing" but by the Xilinx synthesis tool during the compilation phase.

    A lesson I'd share: if you observe this kind of problem, check whether there is a clock-domain mismatch in the code accessing a FIFO...

    Kind regards

    S. Raja Kumar

  • LabVIEW FPGA vs Xilinx ISE

    Hi all

    I'm new to FPGAs and my question is fairly simple: which is better, the LabVIEW FPGA or the Xilinx ISE platform?

    Or does it depend on the application?

    I'm not familiar with these protocols, so I can't answer the question precisely.

    NI has several FPGA products with high-capacity chips.  I guess they could handle the protocols, but I can't make any promises.

    Unless you're already an ISE expert, I don't think you're going to end up with more efficient code than with LabVIEW.  I guess it's possible that higher-capacity chips are available for ISE than for LabVIEW, but I don't know.

    One thing I like about LabVIEW is that you can write the code and compile it for a target without having to purchase the hardware first.  You can program the algorithm, and then figure out what size of FPGA to put it on.

    Bruce

  • Trouble integrating an IP node in LabVIEW FPGA

    Hi all

    I am having trouble with the LabVIEW FPGA IP integration option and was hoping someone could shed some light here.

    I am using simple VHDL code for a 1-bit, 2:1 MUX in order to familiarize myself with IP integration in LabVIEW FPGA.

    In the IP Integration Node configuration, the syntax check says:

    ERROR: HDLParsers:813 - "C:/NIFPGA/iptemp/ipin482231194540D2B0CC68A8AF0F43AAED/TwoToOneOneBitMux.vhd", line 15. Enumerated value U is absent from the selection.

    but I'm still able to compile. Once the node is placed and wired, I get the run arrow for the VI, but when I run it, an error popup appears that says:

    The selected object is only supported inside the single-cycle Timed loop.

    Place a single cycle timed loop around the object.

     

    The selected object in question is my IP integration node.

    I added a timed loop around the node, and while I am now able to run the VI, nothing happens: the output does not light up regardless of the configuration.

    I would say that I've tried everything, but I can't imagine what the problem might be at this point, given that everything compiles and the code is so simple.

    I have attached both the VI and the VHDL code. Please let me know if any problems come up opening them on different FPGA boards.

    Would be really grateful for the help,

    Yusif Nurizade

    Hey Yusif,

    It looks like you enter the single-cycle Timed Loop and never leave it, so the Output indicator never actually gets updated. Try wiring a True constant to the stop condition of the SCTL. Alternatively, you could move all controls/indicators inside the SCTL and get rid of the outer while loop. With that approach, however, you can run into timing-closure problems in larger designs unless you pipeline or otherwise optimize the code.

  • CPU-accessible register in LabVIEW FPGA FlexRIO

    Hello people, I wonder if it is possible to get the following behavior out of LabVIEW.  I suspect that it is not.

    Description of the system: a CVI application that communicates with the FlexRIO via controls and indicators.

    Problem: the design has a CPU-FPGA interface specification which lists the "registers" as a combination of read-only and read/write bit fields.

    Example:

    According to the specification, there should be a 32-bit register.  Bits 31:16 are read-only, and bits 15:0 are read/write, from the perspective of the CPU.  In the LabVIEW world, I would just make a U16 control and a U16 indicator and be done with it.

    However, to meet the specification (written for traditional microprocessor buses), a 32-bit read of an address should read back the full contents of the 32-bit register at that location (implemented as flops on the FPGA, with appropriate memory mapping within the FPGA device).  In the same way, a 32-bit write to an address must store the values into that register (properly masking writes to bits 31:16 within the FPGA device).
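
    To make the masking concrete, here is a small C sketch of the behavior the spec calls for, with a plain variable standing in for the flip-flops on the FPGA; the names and masks are illustrative only.

        #include <stdint.h>
        #include <stdio.h>

        /* One 32-bit "register": bits 31:16 are owned by the FPGA (read-only
         * to the CPU), bits 15:0 are CPU read/write. The variable stands in
         * for the flip-flops on the FPGA. */
        static uint32_t reg32 = 0;

        #define RO_MASK 0xFFFF0000u   /* bits a CPU write must not touch */
        #define RW_MASK 0x0000FFFFu

        /* CPU-side 32-bit write: only the read/write field is updated. */
        static void cpu_write(uint32_t value)
        {
            reg32 = (reg32 & RO_MASK) | (value & RW_MASK);
        }

        /* CPU-side 32-bit read: returns the full register contents. */
        static uint32_t cpu_read(void)
        {
            return reg32;
        }

        /* FPGA-side update of its status field (bits 31:16). */
        static void fpga_set_status(uint16_t status)
        {
            reg32 = (reg32 & RW_MASK) | ((uint32_t)status << 16);
        }

        int main(void)
        {
            fpga_set_status(0xABCD);      /* FPGA publishes a status value */
            cpu_write(0xFFFFFFFFu);       /* CPU write only lands on 15:0  */
            printf("register = 0x%08X\n", cpu_read());   /* 0xABCDFFFF */
            return 0;
        }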

    Is it possible for me to have a single address (basically, a single LabVIEW block diagram component) that will allow me to accomplish this behavior?  It seems to me that the only solution is to pack my registers with bit fields that are either all read-only or all read/write, in order to fit into the LabVIEW paradigm.  This means the spec would have to go back and be rewritten and approved again.

    Thanks in advance,

    -J

    Thanks for the detailed explanation. I am familiar with reading and writing FPGA registers; I have done a lot of non-LabVIEW work recently with an Altera FPGA. I haven't, however, used the CVI-to-LabVIEW-FPGA interface; I have only used the LabVIEW interface. I'm not sure if your question is about CVI, the LabVIEW FPGA interface, or both.

    JJMontante wrote:

    So, a restatement of my original question: is there a mechanism, using controls or indicators, by which both the FPGA AND the CPU can write to the same set of flip-flops in the FPGA?  If I use an indicator, the FPGA can write to the indicator, but the CPU cannot.  If I use a control, the CPU can write to the control, but the FPGA cannot.  Is this correct?

    In LabVIEW FPGA, a control and an indicator are essentially identical. You can write to a control, or read an indicator, using a local variable in the FPGA code. It is common to use a single front-panel item to transfer data in either direction, and it doesn't matter whether it's a control or an indicator. For example, a common strategy uses a Boolean front-panel element for handshaking. The CPU writes a value to a numeric control, and then sets the Boolean true to indicate that new data is available. The FPGA reads that numeric value, and then sets the Boolean false, which tells the CPU that the value has been read. The LabVIEW FPGA interface (on the CPU side) likewise treats all front-panel elements on the FPGA the same, whether they are controls or indicators: they can all be both read and written.
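
    A minimal C sketch of that handshake, with the control/indicator pair modeled as plain shared variables just to show the sequence of operations; it is not meant as real interface code.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Shared "front panel" items, modeled as plain variables. */
        static uint32_t data_value     = 0;
        static bool     data_available = false;

        /* CPU side: publish a value, then raise the flag. */
        static void cpu_send(uint32_t value)
        {
            data_value     = value;
            data_available = true;
        }

        /* FPGA side: if the flag is set, consume the value and clear the flag. */
        static bool fpga_poll(uint32_t *out)
        {
            if (!data_available)
                return false;
            *out = data_value;
            data_available = false;   /* tells the CPU the value was read */
            return true;
        }

        int main(void)
        {
            cpu_send(42);

            uint32_t v;
            if (fpga_poll(&v))
                printf("FPGA consumed %u, flag cleared\n", v);
            if (!fpga_poll(&v))
                printf("no new data until the CPU writes again\n");
            return 0;
        }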

    Does that answer your question at all?
