Best practices for handling data for a large number of indicators
I'm looking for suggestions or recommendations on how to better manage a user interface with a 'large' number of indicators. By large I mean enough that the block diagram becomes big and ugly once the data processing for each indicator is added. The data must be 'unpacked' and then decoded: Booleans, bit-shifted binary fields, etc. The indicators are updated once per second. I'm leaning towards a method that has worked well for me before: binding a network-shared variable to each indicator, then using several subVIs to process each particular piece of data and write to the appropriate variables.
I was curious what others have done in similar circumstances.
I highly recommend that you avoid references. They are useful if you need to update the properties of an indicator (color, font, visibility, etc.) or when you need to decide at run time which indicator to update, but they are not a good general solution for writing indicator values. Do the processing in a subVI, but aggregate the data into an output cluster and then unbundle it for display. It is more efficient (writing to references is slow), and while that won't matter at a 1 Hz refresh rate, it is still not good practice. It takes about the same amount of block diagram space to build an array of references as it does to unbundle data, so you're not saving space. I know I sound very categorical about this; earlier in my career I took over maintenance of an application that made excessive use of references, and it was very difficult to follow where data came from and how it got there. (Incidentally, that application also maintained both a pile of references and a cluster of data, the idea being that you would update the front panel indicator through a reference any time you changed the associated value in the data cluster; unfortunately, someone would often update only one or the other, leading to unexpected behavior.)
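For what it's worth, the decode step is easy to keep in one reusable subVI per message type. As a hedged illustration (sketched in Python, since LabVIEW is graphical), here is the textual equivalent of unpacking one packed status word into named Booleans and returning them together, the analogue of bundling into a single output cluster. The bit layout and field names are invented for illustration:

```python
def decode_status_word(word: int) -> dict:
    """Unpack a 16-bit status word into named indicator values.

    The bit positions below are hypothetical; map them to your
    device's actual register layout.
    """
    return {
        "pump_on":    bool(word & 0x0001),   # bit 0
        "valve_open": bool(word & 0x0002),   # bit 1
        "over_temp":  bool(word & 0x0004),   # bit 2
        "error_code": (word >> 8) & 0xFF,    # bits 8-15 as an integer
    }

# Example: 0x0205 -> pump on, valve closed, over-temperature, error code 2
print(decode_status_word(0x0205))
```

The point is the shape: one function (subVI) per message type, one aggregated record (cluster) out, and the display loop just unbundles it.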
Tags: NI Software
Here's the scenario:
I have a page that shows a popup programmatically from a backing bean method. The popup asks the user a yes/no question, and the subsequent logic path is determined by their response. However, whether the popup is shown at all is conditional. In addition, there is further logic in the original method, apart from the popup logic, that must still run.
The problem with this is that ADF seems to spin the popup off into another thread and does not pause execution of the logic in the original method while waiting for the user's response. The desired effect, however, is that the program stops until the user has answered the question in the popup.
I have not been able to figure out an elegant way to make this happen. Ideally, I think the solution is to encapsulate the logic that occurs after the popup is shown (or not shown) in the original method, and call it from the popup's action listener if the popup is displayed (or directly from the original method if it is not). However, the logic to be encapsulated requires some local variables that were computed before the popup appeared, and there is no way to get those values to the popup action listener so it can pass them on to the encapsulated logic (aside from creating static global variables in the bean, which seems like a bad solution).
Another idea I had was to move the 'show/don't show the popup' logic into the task flow. However, it seems that doing this for every single popup would make the task flow really complicated.
Is there a recommended 'best practice' for handling this situation? It must be a common problem, and it seems I'm going about it all wrong.
However, the desired effect is that the program stops until the user has answered the question in the popup.
This will not happen in any web environment, including ADF.
You will have different events for each stage of the lifecycle:
1 - Opening the popup: popupFetchListener event
2 - Clicking the OK or Cancel buttons: DialogListener event
3 - Pressing the Esc key: popupCancelledEvent
You can share data between these events via pageFlowScope or viewScope.
But if you use ADF BC, you might be better off using transient attributes on the view objects.
We use a 12c DB, and we have a requirement to create a column with a datatype of time only. Could someone please describe the best practices for creating this?
I would greatly appreciate ideas and suggestions.
How do you intend to use the time?
If you are going to combine it with DATEs or TIMESTAMPs from another source, then an INTERVAL DAY TO SECOND or a NUMBER may be better.
Will you need to perform arithmetic on the time, for example increase it by 20% or take an average? If so, NUMBER would be preferable.
Are you just going to display it? In that case, INTERVAL DAY TO SECOND, DATE, or VARCHAR2 would work.
As Blushadow said, it depends.
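To make the NUMBER option concrete, here is a small hedged sketch (in Python rather than SQL, and with invented sample values) of why storing the time as seconds since midnight in a NUMBER column keeps arithmetic trivial: "increase by 20%" and averaging are plain integer math, and formatting back to HH:MM:SS for display is a few lines:

```python
def seconds_to_hms(total: int) -> str:
    """Format a seconds-since-midnight NUMBER value as HH:MM:SS for display."""
    h, rem = divmod(total, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"

times = [3600, 5400, 7200]            # stored NUMBER values (seconds)
increased = int(times[0] * 1.2)       # "increase the time by 20%"
average = sum(times) // len(times)    # take an average

print(seconds_to_hms(increased))      # 01:12:00
print(seconds_to_hms(average))        # 01:30:00
```

With INTERVAL DAY TO SECOND you get the display formatting for free but the percentage/average arithmetic becomes more awkward, which is the trade-off described above.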
I'm working on integrating MDM with Eloqua and am looking for the best approach to sync lead/contact data changes from Eloqua into our internal MDM hub (outbound from Eloqua only). Ideally we would like the integration to be near real time, but my findings to date suggest there is no such option; any integration will involve some kind of scheduling.
Here are the options we considered:
- "Exotic" CRM integration: use internal events to capture changes and queue them in the internal queue (QIP), and access the queue from outside Eloqua via the SOAP/REST API
- Data export: set up a Data Export scheduled to run on request, and poll for the results externally via the SOAP/REST/Bulk API
- Bulk API: poll for changes that have happened since the previous poll via the Bulk API from outside Eloqua (not sure how this differs from the previous option)
Two other options which may not work at all and are potentially anti-patterns:
- Cloud connector: create a campaign that checks for changes on a schedule, and configure a cloud connector (if at all possible) to notify an MDM endpoint to query the contact/lead record from Eloqua.
- "Native" CRM integration (crazy): fake a native CRM endpoint (for example, Salesforce) and use internal events and external calls from Eloqua to push the data into our MDM
Related questions:
- What is the best practice for this integration?
- Is there an option that would give us near-real-time integration (technically asynchronous, but still event/callback based)? (Something like outbound messaging in Salesforce.)
- What limits should we consider with these options? (for example, daily API call limits, SOAP/REST response size)
If you can, I would try to talk to Informatica...
To mimic the native-style integrations, you would use the QIP and control which activities get into it via internal events, just as you would with a native integration.
You would also use the cloud connector API to let you set up a CRM (or MDM) integration program.
You would have identifier fields added to the contact and account objects in Eloqua for their respective IDs in the MDM system, and keep track of the last MDM update with a date field.
A task scheduled outside of Eloqua would run at a certain interval, extract the QIP changes and send them to MDM, and pull the contacts waiting to be sent in place of the cloud connector.
There isn't really anything quite like outbound messaging, unfortunately. You can have form submit data sent to a server immediately (somewhat like integration rules running from form processing steps).
Cheers,
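The scheduled-task pattern described in this thread can be sketched in a few lines. This is a hedged illustration only: `fetch_changes` and `send_to_mdm` are hypothetical stand-ins for a Bulk API export filtered on last-modified date and for your MDM endpoint; none of these names come from the actual Eloqua API.

```python
from datetime import datetime, timezone

def sync_once(fetch_changes, send_to_mdm, last_run: datetime) -> datetime:
    """One polling cycle: pull records changed since `last_run`, push them
    to MDM, and return the new watermark to persist for the next run."""
    now = datetime.now(timezone.utc)
    # e.g. a Bulk API export filtered on a lastModified-style field
    changed = fetch_changes(since=last_run)
    for record in changed:
        send_to_mdm(record)
    return now
```

A scheduler (cron, Windows Task Scheduler, etc.) would call `sync_once` at the chosen interval and store the returned watermark; the polling interval then sets your effective latency, which is why true real-time isn't achievable this way.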
So on my ESXi box, I have a 250 GB drive. I was wondering what the best practice is for having a 'data' drive shared between VMs? I'm pretty new to virtualization, so I'd like to hear your views.
I would basically have the following drive configuration...
Win 2008 R2 - 60 gb
Win 2008 R2 - 60 gb
Ubuntu 10.10 - 20 GB
DATA - 100 GB (shared between the two 2008 machines)
The only way to do this is to assign the drive to one virtual machine and create a network share. Unless you use a file system that supports concurrent access, attempting to present the disk to several systems would likely end in data corruption.
No doubt this question has been answered more than once... sorry.
I would like to know the best practice for storing a VM and its virtual hard disks on a SAN.
Is there any advantage? Does it make sense to keep them on separate LUNs?
It will really depend on the application in the virtual machine, but for most applications there is no problem storing everything on the same datastore.
I'm doing some performance/load-test analysis for View, and I'm curious about best practices for deploying the master VM image. The question is specifically about disk I/O and throughput.
My understanding is that each linked clone still reads from the master image. If that is correct, then it seems you would want the master image to reside on a datastore located on the same array as the rest of the datastores that house the linked clones (and not on some lower-performing array). The reason I ask is that my performance testing is based on some future SSD products. Obviously the amount of available SSD space is limited, but it provides immense amounts of I/O (100k+ IOPS and higher). I want to be sure that by putting the master image on a datastore that is not on the SSD, I am not invalidating the high-end SSD IO performance I'm after.
This leads to another question: if all the linked clones read from the master image, what is the general practice for how many linked clones to deploy per master image before you start to have IO contention problems against that single master image?
Omar Torres, VCP
This isn't really necessary. Linked clones are not directly linked to the parent image. When a desktop pool is created and uses one or more datastores, the parent is copied into each datastore; this copy is called a replica. From there, each linked clone is attached to the replica of the parent in its local datastore. It is an unmanaged replica and offers the best performance, because there is a copy in every datastore that contains linked clones.
What is the best practice for enumerations in ADF?
I need to add enumerations to my application, e.g. gender, marital status.
How should I deliver them? Declaratively, with custom components, or is there another way?
Check out this topic, '5.3 Populating View Object Rows with Static Data', in the Dev Guide.
Recently I have been trying to get a better understanding of best practices for sharpening in a workflow. I guess I hadn't realized it, but there are several places to sharpen. Which are best? Are they additive?
My typical workflow involves capturing an image with a professional DSLR in RAW or JPEG, importing into Lightroom, and exporting to a JPEG file for screen viewing or for printing, either locally or at a lab.
There are three places in this workflow to add sharpening: in the DSLR, manually in Lightroom, and when exporting a JPEG file or printing directly from Lightroom.
It is my understanding that no sharpening is applied to RAW images even if you enable sharpening in your DSLR; sharpening will, however, be applied to JPEGs from the camera.
Back to my question: is it preferable to sharpen manually in the DSLR, in Lightroom, or to wait until you export your final JPEG or send output to your printer? And are the effects additive? If I add sharpening in all three places, am I probably over-sharpening?
You have to treat the two formats differently. RAW data never has any sharpening applied by the camera; only JPEG files do. Sharpening is often thought of as a workflow with three steps (see here for a seminal paper on this idea):
I. A capture sharpening step, which compensates for the loss of sharpness in detail due to the Bayer matrix and the anti-aliasing filter, and sometimes the lens or diffraction.
II. A creative sharpening step, where certain details in the image are 'highlighted' with sharpness (think eyelashes on a model's face), and
III. Output sharpening, where you compensate for the loss of sharpness due to scaling/resampling for the properties of the output medium (such as blur from the way a print process works, or blur from the way an LCD screen lays out its pixels).
All three are implemented in Lightroom. I. and III. are essential and should basically always be performed; II. is up to your creative mind. I. is the sharpening you see in the Develop panel. You need to zoom to 1:1 and optimize the settings. The defaults are OK but quite conservative. Usually you can increase the masking value a little so that you're not sharpening noise, and play with the other three sliders. Jeff Schewe gives an overview of a simple strategy for finding the optimal settings here. It was written for ACR, but the principle remains the same. Most photos will benefit from a bit of optimization. Don't go overboard; just aim for good sharpness at 1:1.
Stage II, as I said, is not essential, but it can be done with the local adjustment brush, or you can go to Photoshop for it. Stage III, however, is very much essential. It is applied in the Export, Print, or Web panel. You can't really preview these effects (especially print-oriented output sharpening), so it will take a little experimentation to see what you like.
For JPEGs, sharpening has already been done in the camera. You could add a small amount of extra capture sharpening in some cases, or simply lower the in-camera sharpening and retain more control in post, but generally it is best to leave it alone. Stages II and III, however, are still needed.
Thank you for taking the time to read this. I would like to know the 'best practices' for permanently disconnecting my computer from the internet and from further updates. I thought I would do a clean install of Windows XP, reinstall my Microsoft Works, and nothing else. I would effectively turn the computer into a word processor. It keeps getting slower and slower, and I'm getting blue screen errors again. I received excellent Microsoft support when this happened before, but since my computer is around 13 years old, I don't think it's worth the headache to try to fix it. I ran the Windows 7 Upgrade Advisor, and my computer would not be able to upgrade. Can someone please tell me how to make it just a word processor, with no updates and no internet connection? (I already have a new computer with Microsoft Windows 7 Home Premium; that's the computer I use. The old one just sits there, and once a week or so I run updates.) I appreciate your time, thank you! Original title: old computer unstable
Clean-install XP guides
You can choose which guide to follow to reinstall XP.
Once it is installed, you do not have to connect it to anything. However, some updates may be required for Works to run; test this by installing Works and seeing whether you get an error message. Apart from that, you should be fine.
We have all changed companies at some point in our lives, and we all go through that process in the first few weeks where you feel new and are just trying to figure out how not to get lost on your way in the mornings.
On top of that, trying to familiarize yourself with your new company's Eloqua instance can be a daunting task, especially at a large organization.
What are the best practices for new employees to learn as efficiently and effectively as possible?
I am in this situation right now, having moved to a much larger organization. It is a huge task trying to understand all the ins and outs, not only of the company, but also of the Eloqua instance, especially when it is complex with many integration points. I find that most of the learning happens when I actually go do the work. I've spent a ton of time going through the programs, documentation, integrations, etc., but after a while it's all just words on a page and nothing gets absorbed.
The biggest thing I recommend is to learn how and why things are set up the way they currently are; ask lots of questions, and don't assume that things work the same way they did at your previous employer.
Get some baseline benchmarks in place so you can demonstrate incremental improvement.
Make a long-term task list: as a new pair of eyes, list the things you'd like to improve.
What are the best practices for adding an end-user license agreement to a form in LiveCycle Designer, and for forcing the user to signify acceptance of the EULA before getting access to the inside of the form?
For now, I have kludged it with a series of four message boxes (necessary because my license agreement, like most, is too long to fit in a single messageBox). The first three message boxes have OK/Cancel buttons: if the user clicks OK, she gets the next EULA message box; if she clicks Cancel, the form closes. The last (fourth) box has Yes/No buttons with corresponding behavior. It seems to work (I think?), but it's ugly. Is there an 'easy way' to do this with a single pop-up dialog box with custom 'I agree' and 'I decline' buttons?
I've seen references to tools like this, but they are marked as obsolete or abandoned due to security or other unspecified concerns.
Another way might be to keep the form pages hidden at the start and display only a single page containing a text box with the EULA, plus Accept and Decline buttons. If someone declines, you can keep the rest of the form hidden, or display message boxes urging acceptance of the EULA.
If someone accepts, you simply hide the EULA page and display the form pages.
Only a small amount of scripting should be needed to achieve either approach.
What are the options and best practices for system logs for all the newly upgraded ESXi 5.1u1 hosts?
Do I need a syslog server, or can it be safely ignored?
Syslog is preferred, and VMware provides a syslog collector on the vCenter installation disc that can be installed on any Windows host, or on your vCenter server itself. I can't count the number of times I've had hosts crash or logs get lost... fortunately the syslog collector captures everything right up to the crash. It isn't required, but it's really a good idea, with no real cost since you can use your vCenter host.
Here is an article on how to install it:
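For reference, once a collector is running, pointing an ESXi 5.x host at it is a few esxcli calls from the host's shell. This is a hedged sketch; the collector hostname and port below are placeholders for your own:

```
# Point the host at a remote syslog collector (placeholder hostname/port)
esxcli system syslog config set --loghost='udp://syslog.example.com:514'
# Reload the syslog agent so the change takes effect
esxcli system syslog reload
# Allow outbound syslog traffic through the host firewall
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
```

The same loghost setting can also be changed from the vSphere Client under the host's Advanced Settings (Syslog.global.logHost).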
What is the best practice for combining a "regular" VMware server environment and a VDI environment? Can a single environment (ESXi and SAN) accommodate both if it's a brand-new setup? Or is it better to keep them separate?
The quick and dirty answer is "it depends."
Seriously, it really depends on two things: budget and IO. If you have the money for two environments, then buy two, and have one host your server environment and the other your VDI desktops; their IO profiles are completely different.
If that's not possible, try to keep each type of workload on its own dedicated LUN.
I'm looking for networking best practices for using four 1 GB NICs with vSphere 5. I know there are a lot of best practices for 10 GB, but our current config only supports 1 GB. I need to include management, vMotion, Virtual Machine (VM), and iSCSI traffic. If there are others you would recommend, please let me know.
I found a diagram that resembles what I need, but it's for 10 GB. I think it would work...
(I got this diagram HERE; rights go to Paul Kelly)
My next question is how much of a traffic load each object puts on the network, percentage-wise.
For example, 'Management' is very small, and the only time it is really in use is during agent installation; then it uses 70%.
I need the bandwidth percentages, if possible.
If anyone out there can help me, that would be so awesome.
Without knowing your environment, it would be impossible to give you an idea of the bandwidth usage.
That said, if you had about 10-15 virtual machines per host with this configuration, you should be fine.