ReReplace best practices Question

On my page, I have a form with a text box. I want to replace the ampersands and quotes with the correct HTML entity codes. My question is: is it better to run ReReplace on the form data before I store it in my database, or should I apply it in the output when CF assembles the page?

Thanks for any help/tips

That is how the data is to be used now.

I think you will find that, as organizations acquire more data, they find other ways to use that data in the future.

The general rule is to store data as neutrally as possible and format it for the desired output at display time, in this case as HTML.  That way, when the time comes in the future that some other output format is wanted, you are not stuck with the problem of deconstructing the HTML stored in the database and converting it to the new format.
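A minimal sketch of the store-raw, escape-on-output idea, shown in Python purely for illustration (in ColdFusion itself the equivalents would be EncodeForHTML() or the older HTMLEditFormat()):

```python
import html

# The raw user input goes into the database unchanged.
stored_value = 'Tom & Jerry say "hi"'

def render_field(value):
    """Escape &, <, > and quotes only at display time."""
    return html.escape(value, quote=True)

print(render_field(stored_value))
# Tom &amp; Jerry say &quot;hi&quot;
```

The stored value stays neutral, so the same row can later be rendered as HTML, plain text, JSON, etc. without any un-escaping step.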

Tags: ColdFusion

Similar Questions

  • Changing the vswitch, best practices question

    Hello

    Here is the scenario: I have an XP guest and I want to change the vswitch it is connected to.

    Is it safe to simply edit the guest's properties in the vSphere client, set the different vswitch, and select OK (without shutting down the guest)?

    Here is what happened last week and I would like to get feedback to see if I did something outside best practices.

    I have an XP machine whose job is to move files within our company, running an in-house app.  We had problems with performance; network bandwidth has of course been an issue, but also sustained 100% CPU usage whenever the in-house application is running. I made a few changes: first, I moved the network adapter to a vswitch with no other guests connected, giving this guest a non-shared network connection.

    I stopped our in-house application that copies the files, edited the guest's settings, changed to the other vswitch, and selected OK...  Everything seemed fine; I restarted the apps and found no problem.

    The next day, I increased the RAM on the guest from 512 MB to 1024 MB, as available physical RAM was low and I suspected disk caching.  I shut down the guest, edited the memory up to 1024 MB in vSphere...  Restarted, and here is the reason for my questions: the profile used to run the application had been corrupted and would not load.

    I should add that the application's users often use "End Task" in Windows to kill the process when it sometimes stops responding.  Not something I tend to do, and I think it may be a cause of file corruption.

    My boss suggested he believes my approach is the cause of the profile corruption, citing specifically the way I changed the vswitch the guest was connected to.  My understanding at this point is that my approach was equivalent to unplugging a machine from one switch and plugging it into another, and I don't see how that could cause a Windows profile to become corrupt.

    Ideas from the community?  I expect brutal honesty if my method is in error.

    Changing the vswitch a virtual machine is connected to during operation is perfectly acceptable, as long as the switch you move to can access the subnet the virtual machine is configured for - it's like unplugging a machine from an actual physical switch and plugging it into a new physical switch.

  • GROUP BY best practices question

    Consider this example:

    TABLE: SALES_DATA

    firm_id | sales_amt | d_date   | d_data
    415     | 45        | 20090615 | Lincoln Financial
    415     | 30        | 20090531 | Lincoln AG
    416     | 10        | 20081005 | AM General
    416     | 20        | 20080115 | AM General Inc.

    I want the output grouped by firm_id, with the sum of sales_amt and the d_data
    that corresponds to the latest d_date (i.e. max(d_date)).

    Proposed query:

    select firm_id,
           sum(sales_amt) total_sales,
           substr(max(d_data), instr(max(d_data), '~') + 1) firm_name
    from (
        select firm_id, sales_amt, d_date || '~' || d_data d_data
        from sales_data
    )
    group by firm_id

    The output is as expected:

    firm_id | total_sales | firm_name
    415     | 75          | Lincoln Financial
    416     | 30          | AM General

    I know it works, but my QUESTION is: is there a better way to do this, and is the method described above of concatenating the columns when you want to group on several columns against best practices?

    Thank you very much!

    Here is a way that uses analytic functions (I just like them):

    SQL> select * from sales_data;
    
                 FIRM_ID            SALES_AMT D_DATE               D_DATA
    -------------------- -------------------- -------------------- ------------------------------
                     415                   45 15-JUN-2009 00:00:00 Lincoln Financial
                     415                   30 31-MAY-2009 00:00:00 Lincoln AG
                     416                   10 05-OCT-2008 00:00:00 AM General
                     416                   20 15-JAN-2008 00:00:00 AM General Inc.
    
    SQL> select firm_id, sum_amt, d_data
      2  from
      3  (
      4     select firm_id, d_data
      5           ,sum(sales_amt) over (partition by firm_id) sum_amt
      6           ,row_number() over (partition by firm_id order by d_date desc) rn
      7     from   sales_data
      8  )
      9  where rn = 1
     10  ;
    
                 FIRM_ID              SUM_AMT D_DATA
    -------------------- -------------------- ------------------------------
                     415                   75 Lincoln Financial
                     416                   30 AM General
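    For illustration only, here is the same "sum per group, keep the name from the latest-dated row" logic sketched in plain Python (not part of the original Oracle solution):

```python
from collections import defaultdict

def summarize(rows):
    """Group (firm_id, sales_amt, d_date, d_data) rows: sum the amounts
    and keep the firm name from the row with the latest d_date."""
    totals = defaultdict(int)
    latest = {}  # firm_id -> (d_date, d_data) seen so far
    for firm_id, sales_amt, d_date, d_data in rows:
        totals[firm_id] += sales_amt
        # YYYYMMDD strings compare correctly as text
        if firm_id not in latest or d_date > latest[firm_id][0]:
            latest[firm_id] = (d_date, d_data)
    return {f: (totals[f], latest[f][1]) for f in totals}

rows = [
    (415, 45, "20090615", "Lincoln Financial"),
    (415, 30, "20090531", "Lincoln AG"),
    (416, 10, "20081005", "AM General"),
    (416, 20, "20080115", "AM General Inc."),
]
print(summarize(rows))
# {415: (75, 'Lincoln Financial'), 416: (30, 'AM General')}
```

    This mirrors what the analytic query does: one pass that accumulates the sum while tracking the max-date row per group, with no string concatenation tricks.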
    
  • Single user, working on both machines, best practices question

    I use Dreamweaver at home and at work.  I work on my own sites only.  I have just started working with DW and don't know the best practice for this setup.  The way I do it now is through a folder, synchronized between both machines, that hosts the site.  The setup is a local folder on the two machines that automatically syncs over the web.  I created a site on my home machine with a root folder for the site and the images, and then created another site on my work machine pointing at the same synced root folders.  Is this the best practice for this setup?

    I guess what I'm afraid of is the automatic link-updating feature and such within DW.

    Thank you!

    I have the same configuration, and DW's Check In / Check Out feature works for me.

    This requires a local copy of the site on the home and work machines, plus the remote copy on the web.

    I check out whatever files I need to work on, modify them, then upload and check them back in when I'm done.

    DW manages everything.

  • Best practices question

    Hello

    I have a question for the experts in this forum. First of all, I'm a network guy, so I apologize in advance for what I am about to ask...

    What is the most appropriate practice when the following three tasks need to happen on an Oracle database: should they all use the same port, or use different ports, or does it even matter?

    1. Administration

    2. Access from application servers

    3. Database replication

    Just looking for a little education on this subject, and I look forward to the answers from the experts. :)

    Thank you

    OK. So, a quick overview of how connections to Oracle work...

    First, anything that connects to the database from an outside machine will contact the Oracle listener (a separate process outside the database, but on the database server). The listener then hands the connection off to the appropriate database. The listener is normally configured to run on a single port (1521), although you can configure it to run on any port you want, or to listen on multiple ports. One can argue that running the listener on a non-default port improves security, because an attacker would have to probe potentially many ports before finding the listener. But that's fairly weak security given the number of places where the port setting must be maintained.

    On most Unix systems (I want to say all, but I don't know that for sure), all communication with the database happens on the listener port. On most Windows systems, however, the listener will redirect the connection to another port in the unallocated range, which tends to be a royal pain if you try to connect through a firewall. You can force Windows to serve everything through a single port as Unix does (although there are arguably performance implications in doing so). You can also proxy connections through a firewall using Oracle Connection Manager.

    "Administration", though, potentially covers a number of different things. If you're talking about "DBA connects to the database via SQL*Plus to do DBA-ish things", then those connections go through the listener and nothing changes. But you could be talking about things like the web-based Enterprise Manager (which uses HTTP and HTTPS on the database server if you are using Database Control, with the HTTP connection going over a non-default port (7777 is common, I think)), or Grid Control, or SNMP monitoring of the database and related processes, all of which potentially use a number of different ports.

    Justin
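    Since the original poster is a network guy: a quick way to check whether the listener port is reachable from a client machine, sketched in Python. The host name below is hypothetical, and 1521 is just the default listener port described above.

```python
import socket

def port_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical host): probe the default Oracle listener port.
# print(port_reachable("dbserver.example.com", 1521))
```

    Note this only shows the port is open; on Windows (per the answer above) the follow-on redirected port may still be blocked by a firewall.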

  • Best practices question - updating a view object's query based on a drop-down list selection

    I have a question about the most efficient way to perform the following task:

    I'm creating a page that contains several DVT components to display data based on specific queries.  At the top of the page I'm hoping to have a drop-down menu (selectOneChoice) that contains different dates, and based on what the user selects (i.e. 2010, 2011, 2012, 2013, etc.), this will update the query in a certain view object (i.e. WHERE Date = '2011', or WHERE Date = '2013'), and then display the appropriate data in the DVT.  What is the best way to do this, from a managed bean / page point of view as well as the view object?  Advice/documentation would be appreciated.

    Thank you!

    When you drag executeWithParams from one VO after another, it will create operation bindings such as 'executeWithParams2', 'executeWithParams3'. In the method you use to update a view by calling the executeWithParams operation, you call the other operations too.

    Timo

  • I want to confirm a best practices question - are forms made in Acrobat XI Standard a problem when used on an Apple operating system?

    I'm creating forms in Acrobat XI Standard on a 64-bit Windows 7 computer. I have reports that the forms do not work, and I have seen screenshots where the fonts in the form fields are larger and shifted compared to the original form that I created. This happens on Apple products. So is this a problem? I read a post from 2007 saying it was. But is that still true today, and if so, is it possible to deliver a form that works on both Windows and Apple using Acrobat XI?

    Thank you

    In addition, Apple's Preview does not support form features such as JavaScript, so a form most likely will not behave as expected there. What you describe is one of the ways this shows up.

  • Question/design architecture with best practices?


    Should I have separate servers for the web tier, the WebLogic application, and IAM?
    If yes, how will they communicate - for example, can I put a webgate on the server so they can communicate with each other?
    Any reference on how to decide the design? And if I have separate WebLogic instances, one for the application and one for IAM, how will session management take place, etc.?

    How does the general design of an IAM project usually go?

    Help appreciated.
    I have a business web application deployed on weblogic1 and OHS1.

    Deploy a webgate on OHS1 to protect the resources on OHS1. The webgate will be registered with OAM (so it will be able to communicate with OAM).

    I also have IAM deployed on weblogic2 & OHS2.

    The IAM components need WebLogic, so that's fine. But you don't need OHS2 here, unless your application itself requires OHS for hosting. Since you mention it is just for IAM, there is no need for OHS here. Installing IAM requires WebLogic.

    I wanted to know where I should create the provider (weblogic1 or weblogic2), and where to install the webgate (OHS1, OHS2, or both)?

    I hope it is clear now. If it isn't, post a follow-up about what is not clear.

  • New to ColdFusion - Question about best practices

    Hello.

    I've been programming in Java / C# / PHP for the past two years or so, and of late have really taken a liking to ColdFusion.

    The question I have is around the effective separation of code, and whether there are any best practices that are preached for this language. While I was learning Java, I was taught that it is better to have several layers in your code; for example: front end (JSP or ASP) -> business objects -> data access -> database. All the code I've written using these three languages has followed this simple structure, most of the time.

    As I dive deeper into ColdFusion, most of the examples I've seen from veterans of this language really don't incorporate a lot of separation. And I don't mean the simple "here's what this function does" type of online examples, where most of the code is written in a single file; I've seen real projects that were created this way.

    I work with a few developers who have written ColdFusion for a few years and put the question to them as well. Their response was something to the effect of, "I don't know if there is anything recommended for this, but it really doesn't seem like there's a problem making calls like that."

    I searched online for any type of best practices or discussions around that and have not seen much.

    I still consider myself a bit of a noobling when it comes to programming, but best practice is important to me for any language that I learn more about.

    Thanks for the help.

    You might want to take a look at a number of major frameworks available for ColdFusion.

    FW/1, Model-Glue, CFWheels, ColdBox and Mach-II.  They do a great job of giving you a path for the separation of code, best practices, etc.

    http://www.carehart.org/cf411/#cffw

  • Data warehousing question/best practices

    I have been given the task of copying a few tables from our production database to a data warehousing database on a nightly (once per day) basis. The number of tables will grow over time; currently it is 10. I am interested not only in getting the task done, but also in best practices. Here's what I came up with:

    (1) Drop the table in the destination database.
    (2) Re-create the destination table from the script provided by SQL Developer when you click the "SQL" tab while viewing the table.
    (3) INSERT INTO the destination table from the source table using a database link. Note: I'm not aware of any columns in the tables themselves that could be used to filter only the added/deleted/modified rows.
    (4) After importing the data, create the indexes and primary keys.

    Related issues:
    (1) SQL Developer included the following lines when generating the table creation script:

    <table creation DDL>
    followed by
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE (INITIAL 251658240 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_PGROW"

    It generated a snippet like this for the table, the primary key and each index.
    Is it necessary to include these in my code if they are all default values? For example, one of the indexes gets scripted as follows:

    CREATE INDEX "XYZ"."PATIENT_INDEX" ON "XYZ"."PATIENT" ("Patient")
    -- do I need the following four lines?
    PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE (INITIAL 60817408 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_IGROW"

    (2) If anyone has advice on best practices for warehousing data like this, I'm very eager to learn from your experience.

    Thanks in advance,

    Carl

    I strongly suggest not dropping and re-creating the tables every day.

    The simplest option would be to create a materialized view on the destination database that queries the source database, and do a complete refresh of the materialized view every evening. Even better, you can create a materialized view log on the source table and then do an incremental (fast) refresh of the materialized view.

    You can schedule the refresh of the materialized view either in the definition of the materialized view itself, as a separate job, or by creating a refresh group and adding one or more materialized views to it.

    Justin

  • Best practices Apple ID

    I help family members and others with their Apple products. Probably the number one problem revolves around Apple IDs. I have seen users do the following:

    (1) Share IDs among family members, but then wonder why messages/contacts/calendar entries etc. are all shared.

    (2) Have several Apple IDs willy-nilly associated with seemingly random devices, with some IDs not used for anything.

    (3) Forget passwords. They always forget passwords.

    (4) This one I don't really understand. They use an e-mail address from another system (gmail.com, hotmail.com, etc.) as their Apple ID. Invariably, they will use a different password for the Apple ID than the one they use for the other email account, so they are constantly confused about which account to sign in to.

    I have looked around for an article on best practices for creating and using Apple IDs, but could not find such a post. So I thought I would throw out a few suggestions. If anyone knows of a list or wants to suggest changes/additions, please feel free. These are best practices for normal circumstances, i.e. not corporate accounts, etc.

    1. Every person has exactly one Apple ID.

    2. Do not share Apple IDs - share content.

    3. Do not use an email address from another account as your Apple ID.

    4. When you create a new Apple ID, don't forget to complete the secondary information at https://appleid.apple.com/account/manage. Your rescue email and security questions are EXTREMELY important.

    5. The last step is to collect the information you entered into a document, save it to your computer, AND print it and store it somewhere safe.

    Suggestions?

    I disagree with no. 3: there is no problem with using a non-iCloud address as the primary ID. Indeed, depending on where you set up your ID, you may have no choice but to.

  • Best practices Upgrade Path - Server 3 to 5?

    Hello

    I am attempting a migration and upgrade of a Profile Manager server. I currently run an older Mac mini server on 10.9.5 and Server 3 with a substantial Profile Manager installation. I recently, and successfully, migrated the server itself off the old Mac mini onto a late-2009 Xserve by cloning the drive. I'm still double-checking everything, but it seems the transition between the mini and the Xserve was successful and everything works as it should (just with improved performance).

    My main question now is that I want to get this up to date software-wise and move to Server 5 and 10.11. I see a lot of documentation (even officially from Apple) on best practices for upgrading from Server 3 to 4 and Yosemite, but can't find much on Server 5 and El Capitan, much less going from 3 to 5. I understand that I'll probably have to buy the app again and that's fine... but should I stage this, going 10.9 to 10.10 and Server 4... make sure all is well... then jump to 10.11 and Server 5... or is it "safe" (or OK) to jump from Server 3 to 5 (and 10.9.5 to 10.11.x)? Obviously the App Store is happy to make the jump from 10.9 to 10.11, but again, I'm looking for best practices here.

    I will of course ensure that all backups are up to date and make another clone just before whichever path I take... but I was wondering if anyone has made the leap from 3 to 5... and had things (like Profile Manager) still work correctly on the other side?

    Thanks for any info and/or management.

    In your position I would keep the Mini running Server 3, install El Capitan and Server 5 on the Xserve, and walk through setting up Server 5 by hand. Things that need to be "migrated", such as Open Directory, can be handled by exporting from the mini and reimporting on the Xserve.

    In my experience, OS X Server installations that were "migrated" always seem to end up with esoteric problems that are difficult to correct, and it's easier to adopt the procedure above than to lose a day trying.

    YMMV

    C.

  • TestStand code/sequence sharing best practices?

    I am the architect for a project that uses TestStand, Switch Executive and LabVIEW code modules for automated control of a number of UUTs that we build.

    It's my first time using TestStand, and I want to adopt software best practices that allow code sharing among my other software engineers, each of whom will be responsible for creating TestStand scripts for one of the many DUTs.  I've identified some "functions" which will be common across all UUTs, like connecting two points on our switching matrix and then taking a voltage measurement with our DMM to check that it meets the limits.

    The gist of my question is: what is the TestStand equivalent of a LabVIEW library for sequence calls?

    Right now what I've done is create these common/generic sequences and place them in their own sequence file called "Common Functions.seq" as a pseudo-library.   This "Common Functions.seq" file is never intended to be run as a script itself; rather, the sequences inside are called by another top-level sequence that is unique to one of our DUTs.

    Is this a good practice, or is there a better way to compartmentalize common sequence calls?

    It seems that you are doing it correctly.  I always remove MainSequence from there too; it will trigger an error if someone tries to run it with a model.  You can also access the sequence file's properties and disassociate it from any model.

    I always equate a sequence to a VI and a sequence file to an lvlib.  In this analogy, a step is a node on the diagram and local variables are wires.

    They just need to include this library of sequence files in their build (and all of its dependencies).

    Hope this helps,

  • TDMS & DIAdem best practices: what happens if my signal has breaks/gaps?

    I created a LV2011 datalogging application that stores a lot of data to TDMS files.  The basic architecture is like this:

    Each channel has these properties:

    t0 = start time

    dt = sampling interval

    Channel values:

    1D array of DBL values

    After datalogging starts, I just keep appending the channel values.  And if the size of the TDMS file goes beyond 1 GB, I create a new file and start again.  The application runs continuously for days/weeks, so I get a lot of TDMS files.

    It works very well.  But now I need to change my system to allow pausing/resuming data acquisition.  In other words, there will be breaks in the signal (probably from 30 seconds to 10 minutes).  I had originally considered logging two values for each data point, as in an XY chart (value & timestamp).  But I am opposed to this in principle because, in my view, it fills your hard drive unnecessarily (twice as much disk footprint for the same data?).

    Also, I've never used DIAdem, but I want to ensure that my data can be easily opened and analyzed in DIAdem.

    My question: are there best practices for storing signals that pause/break like that?  I would just start a new record with a new start time (t0), and DIAdem would somehow "bind" these signals together... i.e., it would know that it is a continuation of the same signal.

    Of course, I should install DIAdem and play with it.  But I thought I would ask the experts about best practices first, as I have no knowledge of DIAdem.

    Hi josborne;

    Do you plan to create a new TDMS file whenever the acquisition stops and starts, or were you planning to store multiple power-up sections in the same TDMS file?  The best way to manage the date/time offset is to store one waveform per channel per power-up section and use the "wf_start_time" channel property, which TDMS fills in automatically from waveform data if you wire an orange floating-point array or a brown waveform wire to the TDMS Write.vi.  DIAdem 2011 has the ability to easily access the time offset when it is stored in this channel property (assuming it is stored as a date/time and not as a DBL or a string).  If you have only one power-up section per TDMS file, I would certainly also add a "DateTime" property at the file level.  If you want to store several power-up sections in a single TDMS file, I would recommend using a separate group for each power-up section.  Make sure you store the following channel properties in the TDMS file if you want the information to flow naturally into DIAdem:

    'wf_xname'
    'wf_xunit_string'
    'wf_start_time'
    'wf_start_offset'
    'wf_increment'

    Brad Turpin

    DIAdem Product Support Engineer

    National Instruments

  • Best practices for .ini file reading

    Hello LabVIEWers

    I have a pretty big application that uses a lot of hardware communication with various devices. I created an executable, because the software runs at multiple sites. Some settings are currently hardcoded; others I put in an .ini file, such as the camera focus. The thought process was that these kinds of parameters may vary from one site to another and can be defined by a user in the .ini file.

    I would now like to extend the application with the possibility of using two different versions of the key piece of device hardware (an atomic force microscope). I think it makes sense to do so using two versions of the .ini file. I intend to create two different .ini files, and a trained user could still adjust settings there, such as the camera focus, if necessary. The other settings they cannot touch. I also intend to force the user to select an .ini file when starting the executable, via a file dialog box, unlike now where the (only) .ini file is automatically read in. If no .ini file is specified, the application would stop. Does this use of the .ini file make sense?

    My real question now revolves around how to manage reading the .ini file. My estimate is that between 20-30 settings will be stored in the .ini file. I see two possibilities, but I don't know which is the better choice, or if I'm missing a third:

    (1) (current solution) I created a reader VI in which I write all the .ini values to the project's global variables. All other VIs only read the global variables (no other writes), to avoid race conditions.

    (2) I pass the path of the .ini file into the subVIs and read the values from the .ini file when necessary. I can open it read-only.

    What is the best practice? Which is more scalable? Advantages/disadvantages?

    Thank you very much

    1. I recommend just using one configuration file.  Simply have a key that says which type of device is actually used.  This will make things easier on the user, because they will not have to keep selecting the right file.

    2. I would use the globals.  There is no need to constantly open, read values from, and close a file when it is the same everywhere.  And since it's just a single read at startup, globals are perfect for this.
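    The read-once-at-startup pattern can be sketched outside LabVIEW; here is a minimal Python illustration (the section names, keys and values are invented for the example, including the "device_type" key suggested above):

```python
import configparser

def load_settings(ini_path):
    """Read every setting once at startup; the rest of the program uses
    the returned dict instead of re-reading the file (the 'globals' approach)."""
    cfg = configparser.ConfigParser()
    with open(ini_path) as fh:      # fail loudly if the file is missing
        cfg.read_file(fh)
    return {
        "device_type": cfg.get("hardware", "device_type"),
        "camera_focus": cfg.getfloat("camera", "focus"),
    }
```

    Everything else only reads the returned settings, which mirrors option (1): one writer at startup, read-only access afterwards, no race conditions.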
