Modeling a synchronous dual-port block RAM with LabVIEW FPGA

This question caught my attention recently.

I am trying to model a particular design element called "RAMB4_S8_S8" with the LabVIEW FPGA Module. This element is a synchronous dual-port block RAM that allows simultaneous, independent access through two ports: one port can perform a read/write operation while, at the same time, the other port does the same. There are two possible port conflicts, however. The first is when both ports try to write to the same memory cell. The other is when one port writes to a memory cell while, at the same time, the other port reads from it. Other than that, everything should be a legitimate operation.
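
To make the conflict rules concrete, here is a rough behavioral model of the element in plain Python (names such as `DualPortRAM` are mine, not NI or Xilinx API): a 512 x 8 RAM (the RAMB4_S8_S8 geometry) where each port presents a write-enable, an address, and write data on every clock edge, and the two conflict cases above are flagged:

```python
class DualPortRAM:
    """Behavioral sketch of a synchronous dual-port RAM (illustrative only)."""

    def __init__(self, depth=512, width=8):
        self.mem = [0] * depth
        self.mask = (1 << width) - 1

    def clock(self, port_a, port_b):
        """Apply one clock edge; each port is a (write_enable, addr, data) tuple."""
        we_a, addr_a, data_a = port_a
        we_b, addr_b, data_b = port_b
        if addr_a == addr_b and we_a and we_b:
            # Conflict 1: both ports write the same cell -> contents undefined.
            raise ValueError("write/write conflict at address %d" % addr_a)
        if addr_a == addr_b and we_a != we_b:
            # Conflict 2: one port writes while the other reads the same cell;
            # the read data is timing-dependent in real hardware.
            raise ValueError("read/write conflict at address %d" % addr_a)
        # Synchronous read: outputs reflect memory contents before this edge.
        out_a, out_b = self.mem[addr_a], self.mem[addr_b]
        if we_a:
            self.mem[addr_a] = data_a & self.mask
        if we_b:
            self.mem[addr_b] = data_b & self.mask
        return out_a, out_b
```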

In order to reproduce this, I select a memory block that is integrated into my FPGA target. One interface is configured for read mode and the other for write mode. For the arbitration option, I set both interfaces to "Arbitrate if Multiple Requestors Only". I then get a compile error when I try to run my FPGA code for this model in an SCTL. The error message is something like "Multiple objects request access to a resource through a resource interface configured with the arbitration option 'Arbitrate if Multiple Requestors Only', which is supported inside a single-cycle Timed Loop only if there is a single requestor per interface."

This error goes away if I replace the SCTL with a simple While Loop, but that is not what I would like to implement. So I wonder whether there is a better solution to this problem, or whether it is just a limitation of the LabVIEW FPGA Module.

Thank you.

Yes, you can use a case structure to perform the operations you want on alternating clock cycles, with all the code inside a single SCTL. Basically, read the first address and store it in a register on one clock cycle, then read the second address on the second clock cycle. This gives you two valid memory reads every two clock cycles. I have included a crude snippet to illustrate the concept. The case selectors are identical, with address A connected to the memory in the True case and address B in the False case. Your overall dual-port memory model will stay intact, but it will operate at half rate.
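
For readers following along without the snippet, the alternating-cycle idea can be sketched in ordinary Python (the names here are hypothetical; in LabVIEW it is a case structure inside the SCTL, with the Boolean phase as the selector and a feedback node as the register):

```python
def dual_read_half_rate(mem, addr_a, addr_b, cycles):
    """Alternate a single memory read between two addresses, one per cycle.

    Returns one (value_a, value_b) pair for every two cycles, emulating a
    dual-port read at half rate through a single read port.
    """
    reg_a = None       # register holding the value read in the "True" case
    phase_a = True     # case selector, toggled every cycle
    results = []
    for _ in range(cycles):
        if phase_a:
            reg_a = mem[addr_a]                    # True case: read address A
        else:
            results.append((reg_a, mem[addr_b]))   # False case: read address B
        phase_a = not phase_a
    return results
```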

Take a look at the white paper that provides more details on memory constructs:

Data on FPGA Targets (FPGA Module)

The bullet on block memory indicates that dual-port block memory cannot be used in a read-only configuration, i.e., as a dual-port ROM; dual read/write port access must be emulated with custom code.

Tags: NI Software

Similar Questions

  • NI 9512 with LabVIEW FPGA Interface

    Is it possible to use the NI 9512 stepper drive module with the LabVIEW FPGA interface, or is it only possible to use it with the Scan Interface? When I try to add the module to an FPGA target, I get an error telling me that LabVIEW FPGA does not support this module with the latest version of NI-RIO, but I do have the latest version of NI-RIO installed.

    Hi Checkit,

    You're right - the 9512 cannot currently be used in FPGA mode. That is an error in the documentation. The 9514 and 9516 can, however.

  • HELP - DOES THE SPARTAN 3E-100 CP132 FPGA WORK WITH LABVIEW FPGA?

    HI EVERYONE, I'M TRYING TO USE MY FPGA WITH LABVIEW, BUT I DON'T KNOW IF IT'S COMPATIBLE. I INSTALLED THE DRIVERS, THE FPGA MODULE AND LABVIEW 2012. I'M USING WINDOWS 7 32 BIT, AND AFTER I COMPILED IT SAYS:

    LabVIEW FPGA called another software component, and the component returned the following error:

    Error code:-310601

    NI-COBS: Unable to detect the communication cable.
    Check that the communication cable is plugged into your computer and your target. Also, verify that the proper drivers are installed.

    Thank you.

    =)

    Hi dvaldez2.

    LabVIEW FPGA offers no support for any third-party hardware, other than the Spartan-3E XUP Starter Kit. Those are probably the drivers you downloaded.

    http://digital.NI.com/express.nsf/bycode/Spartan3E?OpenDocument&lang=en&node=seminar_US

    However, this driver supports only the Starter Kit board itself (http://www.digilentinc.com/Products/Detail.cfm?NavPath=2,400,790&Prod=S3EBOARD). You may not use the driver with any other Xilinx FPGAs.

    I hope this helps.

  • Hard drive size and RAM with LabVIEW 2009

    Hello

    I need to know, please, how I can get the whole size of the hard drive and the free hard-drive space using LabVIEW.

    Same for the RAM: whole size and free space.

    Kind regards

    Take a look at this example (and the thread):

    http://forums.NI.com/T5/LabVIEW/size-of-RAM/m-p/63942#M39616
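
    For comparison outside LabVIEW, the same four quantities can be read with a few standard-library calls in Python. `shutil.disk_usage` is portable (Python 3.3+); the RAM figures below use POSIX `sysconf` names and are Linux-specific - on Windows you would query `GlobalMemoryStatusEx` instead:

```python
import os
import shutil

def disk_sizes(path="/"):
    """Return (total_bytes, free_bytes) for the volume containing path."""
    usage = shutil.disk_usage(path)
    return usage.total, usage.free

def ram_sizes():
    """Return (total_bytes, free_bytes) of physical RAM (POSIX/Linux only)."""
    page = os.sysconf("SC_PAGE_SIZE")
    total = page * os.sysconf("SC_PHYS_PAGES")
    free = page * os.sysconf("SC_AVPHYS_PAGES")
    return total, free
```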

  • Digital laser lock with LabVIEW FPGA?

    Hello

    Sorry to bother you if you are not interested in this digital signal processing issue. We are looking for possible digital solutions to our closed-loop laser cavity frequency-locking problem (see the attached PDF file for more details).  The goal is to flatten the PZTs' transfer function (cancel the resonances and anti-resonances and their matching phase shifts) in the frequency domain, in addition to the normal PID control.  The necessary input/output voltage signals are small (we have our own high-power amplifiers for the PZTs), and their bandwidth must be at least 50 kHz (100 kHz would be optimal).

    Among the various NI hardware/software options (DSP, FPGA, cRIO, etc.), would anyone recommend a cost-effective solution for rapid prototyping?

    Thank you!

    I would look at the NI 7854R PXI FPGA cards.  They have AI rates of 750 kHz and AO rates of 1 MHz.  Depending on the processing involved, you might expect between 200 and 750 kHz closed-loop control.  If the processing is very intense, it's probably something less than 200 kHz.

    That said, reaching these performance levels is not trivial, and great care and attention to detail must go into the coding of the FPGA.
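
    As a point of reference for the "normal PID control" part of the question, here is a minimal textbook discrete-time PID update in plain Python (the gains and the sample period are placeholders, not tuned values for any PZT - at a 100 kHz loop rate the period would be 10 µs):

```python
class PID:
    """Textbook discrete-time PID controller (illustrative sketch)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        """One control step: return the actuator command for this sample."""
        error = setpoint - measurement
        self.integral += error * self.dt                  # accumulate I term
        derivative = (error - self.prev_error) / self.dt  # backward difference
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```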

    Good luck

  • Pass a LabVIEW FPGA block RAM address to a CLIP node?

    Hello

    I need to pass a LabVIEW FPGA block RAM memory index to a CLIP node so that the CLIP node can access the data in the BRAM.  The CLIP node contains an IP block that we developed, and the IP uses the Xilinx BRAM driver to access the data.  I guess we need to pass the physical address of the BRAM to the CLIP node.

    Is this possible? If so, how? If this is not the case, what would be an alternative?

    Thank you

    Michel

    If I understand you correctly, yes, you should be able to use the memory block from the Xilinx palette in LabVIEW FPGA and, inside the single-cycle Timed Loop, connect that block's ports to the CLIP signals exposed by your colleague's IP. You may need to tweak/adapt some of the signals slightly for LabVIEW dataflow.

  • I have a late 2013 13" Retina MacBook Pro with 4 GB of RAM and am thinking of upgrading to El Capitan. Is this enough RAM?

    I have a late 2013 13" Retina MacBook Pro with 4 GB of RAM and am thinking of upgrading to El Capitan. Is this enough RAM?

    Of course you can upgrade to El Capitan. But I think it would be better if you also swapped the internal hard drive (SATA drive) for an SSD. You will get better, faster results that way, even though you only have 4 GB. This is mine... an early 2011 MacBook Pro.

  • R620 with two Intel X520 dual-port 10 GB DA/SFP+ cards

    We have R620s running our VMware ESXi environment. Currently, each server has an Intel X520 with (2) 10 GB SFP+ ports and (2) gig Ethernet ports. Each server also has a 4-port Broadcom gig Ethernet card. Can I swap the Broadcom card in each server for another Intel X520, so that each server has (4) 10 GB SFP+ ports and (4) gig Ethernet ports?

    Thank you.

    Hello

    jmartinich
    Intel X520 with (2) 10 GB SFP+ ports and (2) gig Ethernet ports

    That sounds like the X520/I350 network daughterboard. You have only one NDC slot in the system, so only one of those can be installed. I do not see a similar PCIe adapter version.

    If you have the 2-CPU configuration with 3 PCIe slots, you could just add another dual-port X520. If you have the 2-connector PCIe configuration, then I am not sure all the ports you want make a valid population.

    Thank you

  • Cannot send/receive at full speed with Intel dual-port 10 GIG 82599 Ethernet in a guest OS on ESXi 5.5

    I want to determine whether or not it is possible to send/receive at 20 Gbps throughput with an Intel 82599 dual-port card inside a guest VM on ESXi. I use ntop's PF_RING DNA to send and receive traffic. I have configured the dual-port card in DirectPath I/O mode to give the guest VM direct access to the card. For the traffic generator, I use a bare-metal machine with another dual-port Ethernet card. The bare-metal generator and the ESXi host have their dual Ethernet ports directly connected via SFP+ cables.

    In the first experiment, the traffic generator generates 10 Gbps of traffic on port 1. The guest VM is able to capture packets at full speed without any packet loss, even for 64-byte packets, which translates to 14 million PPS. I then had the guest VM send at 10 Gbps to the traffic generator, which now receives. However, the guest VM was only able to generate about 7 Gbps of traffic regardless of packet size. The CPU seems fine at only about 20% load. I do not understand why the VM can receive at full speed but cannot transmit at line rate.

    In the second experiment, the traffic generator generates 20 Gbps of traffic on both ports. The guest VM unfortunately can only capture at 5 to 6 Gbps per port, for a total of 10-12 Gbit/s. I have the receiving process listen on each port on a different processor. Then I had the VM generate 10 Gbit/s on each port. Unfortunately, it could not exceed 5 to 6 Gbps per port, regardless of packet size.

    The bare-metal machine is only a low-end quad-core 2.2 GHz AMD machine. It can send at full line rate: 10 Gbps for 1 port and 20 Gbps for 2 ports. The ESXi server is actually a much beefier machine with quad-core 3.2 GHz Intel Xeons. There is plenty of memory and CPU on the ESXi machine.

    The ESXi server runs 5.5. I have 1 guest virtual machine running the experiment.

    I'm quite puzzled why there is a bottleneck even in DirectPath I/O mode. Both CPU and memory seem fine. I wonder if there is any setting I need to play with in ESXi. I appreciate all help. Thanks in advance.

    It turned out that I had placed the network card in PCIe slot 3, which has only x4 routing. I moved it to slot 1 (x8). Now I get the full 20 GIG.
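
    The slot explanation checks out on the back of an envelope (my figures, not from the thread): PCIe 2.0 delivers roughly 500 MB/s of usable payload per lane per direction after 8b/10b encoding, so an x4 link tops out around 16 Gbps - below the 20 Gbps that two saturated 10 GbE ports need - while x8 leaves headroom:

```python
MB_PER_LANE = 500  # approx. usable MB/s per PCIe 2.0 lane, per direction

def pcie_gbps(lanes, mb_per_lane=MB_PER_LANE):
    """Approximate usable PCIe 2.0 bandwidth in Gbps for a given lane count."""
    return lanes * mb_per_lane * 8 / 1000.0

x4_gbps = pcie_gbps(4)  # the x4 slot the card started in: ~16 Gbps
x8_gbps = pcie_gbps(8)  # the x8 slot that fixed it: ~32 Gbps
```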

  • Intel® PRO/1000 PT Dual Port Server Adapter (PCI Express) with ESX/ESXi 4?

    Hello

    I am trying to work out whether the following card works with VMware ESXi 4.

    The card is the

    Intel® PRO/1000 PT Dual Port Server Adapter (PCI Express)

    which uses the Intel® 82571GB Gigabit controller.

    However, the VMware HCL mentions the Intel 82571EB Gigabit Ethernet controller.

    Are these two compatible - will VMware ESXi work with this card?

    I am hoping that someone out there who has tried it can give me an idea.

    Thank you

    Ward.

    Virtually all Intel devices are supported...

  • Satellite M70-376 Dual Channel RAM?

    Hello
    I want to know whether my Satellite M70-376 has dual-channel RAM.

    Hello

    I can confirm that the laptop uses, and is compatible with, DDR memory modules.
    As far as I know, there should be 2 memory banks that allow you to insert two modules.
    As mentioned, you can upgrade the memory to a maximum of 2048 MB of RAM (2 x 1024 MB).

    Greetings to all ;)

  • 8 GB RAM with SSD cache versus 12 GB RAM without SSD cache

    We are looking to buy an HP Envy 17t-j100 TouchSmart laptop for our son.  Which would be better:

    8 GB of RAM with 24 GB SSD acceleration cache, or 12 GB of RAM without an SSD cache?

    The 24 GB SSD cache offers performance very similar to a full SSD drive at a much lower cost. I would buy the unit with the 24 GB SSD cache drive and as little memory as they will sell me, and then increase the RAM myself - though 8 GB of RAM is frankly more than plenty. In fact, it's probably better than 12 GB, because its 2 identical modules allow dual-channel memory operation. To summarize, the best option is clearly 8 GB of RAM and the 24 GB SSD disk cache.

  • HP ZBook 17: only one port of a Dual Port EC2000S (RTL8111E, r8169) ExpressCard detected

    I have the problem that only one port of this dual-port network ExpressCard, a StarTech EC2000S (RTL8111E, r8169), is detected.

    The problem occurs in both Windows 7 and Fedora Linux. (The system is dual-boot.)

    I have updated the BIOS to the latest version, 01.33.

    The same card is OK in an HP ProBook 4720.

    Who can help?

    The StarTech EC2000S dual-port network adapter does not work with Haswell-CPU systems the way it does with the Ivy Bridge chips.

    There is therefore no solution to this problem.

  • Switch ports connected with a crossover cable on the same switch...

    Hi experts,

    Just got a doubt.

    If two ports of a standalone switch are connected by a crossover cable, how will STP behave in this scenario?

    Thank you

    Arun

    Hi arun,

    As we all know, switches select ports according to the following STP criteria:

    (1) lowest root BID

    (2) lowest path cost to the root bridge

    (3) lowest sender BID

    (4) lowest port priority

    (5) lowest port ID.

    So in your scenario, all values up to number 4 will be equal and there will be a tie. The port ID will therefore be used for the tie-break: the port with the higher port ID will be blocked and the one with the lower port ID will be in the forwarding state.

    Example:

    If you connect e1/1 and e1/2 with a cable on the same switch, then e1/1 will be in the forwarding state and e1/2 will be in the blocking state.
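
    Since every criterion above is "lowest value wins", the whole decision reduces to a lexicographic comparison, which a short Python sketch can mimic (the field values below are common defaults, chosen only for illustration):

```python
def stp_preferred(candidate1, candidate2):
    """Pick the winning candidate; each is a tuple ordered by the STP criteria:
    (root_bid, root_path_cost, sender_bid, port_priority, port_id).
    Python compares tuples field by field, so the lower tuple wins."""
    return candidate1 if candidate1 < candidate2 else candidate2

# Self-looped switch: everything ties until the port ID, so e1/1 forwards
# and e1/2 blocks.
e1_1 = (32768, 0, 32768, 128, 1)   # port e1/1
e1_2 = (32768, 0, 32768, 128, 2)   # port e1/2
```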

    HTH,

    Kind regards

    Shri :)
