Best practices for the storage of the VM and VHD

No doubt this question has been asked more than once already... sorry.

I would like to know the best practice for storing a VM and its virtual hard disk on a SAN.

Is there any advantage to keeping them on separate LUNs?

Thank you.

It will really depend on the application of the virtual machine - but for most applications there is no problem with storing everything on the same datastore.

Tags: VMware

Similar Questions

  • Best practices for CFCs and CFFUNCTION

    I'm curious as to what would be the best place for cffunctions.  Should they be in the Application.cfc file if they need to be called from various locations throughout a site?

    Or is it a better idea to put the cffunctions on one page and use cfinclude to include that page in the pages where the functions are needed?

    Or is there a better way to deal with them?

    I have a bunch of small time-saving pieces of code.

    You can still use an include, of course. You should think about CFCs when certain functions share common themes, when functions become complex, or when the number of functions becomes large.

    Maybe they should all go in a CFC as UDFs?

    Yes, they could go in a CFC, but not necessarily all in one CFC. You should consolidate functions only if they express the behavior of a particular concept. For example, you could make functions that manipulate strings, such as getRandomString, part of a StringManipulation.cfc.

    So I take it I can call any one of the several functions in a CFC?

    You can call any number of functions from a CFM page, or from a function in another CFC.

    I didn't understand whether a CFC is a single process, or whether it can be a collection of processes.

    A function called through an instance of a component, as follows, is a single process.


    However, there are ways to create two or more of these processes to run at the same time.

    I'm not too clear on where to put it so that it is accessible by all pages of a site.

    As Dan said, you can put them inside or outside of the wwwroot folder. The important thing is to organize your CFM and CFC files in folders so that your application has a logical, easy-to-follow structure.

  • What are the best practices for creating a time-only data type, not a Date

    Hi gurus,

    We use a 12c DB, and we have a requirement to create a column with a time-only datatype. Could someone please describe the best practices for creating this?

    I would greatly appreciate ideas and suggestions.

    Kind regards
    Ranjan

    Hello

    How do you intend to use the time?

    If you are going to combine it with DATEs or TIMESTAMPs from another source, then an INTERVAL DAY TO SECOND or a NUMBER may be better.

    Will you need to perform arithmetic operations on the time, for example increase the time by 20%, or take an average?  If so, NUMBER would be preferable.

    Are you just going to display it?  In that case, INTERVAL DAY TO SECOND, DATE or VARCHAR2 would work.

    As Blushadow said, it depends.
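To make the NUMBER option concrete, here is a minimal sketch (in Python rather than SQL, purely as an illustration) of storing a time of day as seconds since midnight; scaling by 20% and averaging then become plain arithmetic:

```python
# Storing a time of day as a plain number (seconds since midnight)
# makes arithmetic like "increase by 20%" or averaging trivial.
def to_seconds(hh_mm_ss):
    h, m, s = (int(part) for part in hh_mm_ss.split(":"))
    return h * 3600 + m * 60 + s

def to_hms(seconds):
    seconds = int(round(seconds))
    return "%02d:%02d:%02d" % (seconds // 3600, seconds % 3600 // 60, seconds % 60)

t = to_seconds("01:30:00")                      # 5400
increased = to_hms(t * 1.2)                     # "01:48:00"
average = to_hms((to_seconds("00:10:00") + to_seconds("00:20:00")) / 2)
```

In Oracle the same values would simply live in a NUMBER column, with the conversion to and from display format done at the edges of the application.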

  • vSphere 5 networking best practices for using 4 x 1 GbE NICs?

    Hello

    I'm looking for networking best practices for using four 1 GbE NICs with vSphere 5. I know there are a lot of good practices for 10 GbE, but our current config only supports 1 GbE. I need to include management, vMotion, virtual machine (VM) and iSCSI traffic. If there are others you would recommend, please let me know.

    I found a diagram that resembles what I need, but it's for 10 GbE. I think it would still work...

    vSphere 5 - 10GbE SegmentedNetworks Ent Design v0_4.jpg (I found this diagram HERE - credit goes to Paul Kelly)

    My next question is: how much of the traffic load does each type of traffic put on the network, percentage-wise?

    For example, management traffic is very small, and the only time it is heavily used is during agent installation - then it uses 70%.

    I need the percentage of bandwidth, if possible.

    If anyone out there can help me, that would be so awesome.

    Thank you!

    -Erich

    Without knowing your environment, it would be impossible to give you an idea of your bandwidth usage.

    That said, if you had about 10-15 virtual machines per host with this configuration, you should be fine.

    Sent from my iPhone

  • Best practices for image compression in DPS

    Hello! I have read up on best practices for image compression in DPS, and I read that the source assets for panoramas, image sequences, pan-and-zoom images and audio skins are NOT resampled on download. You need to resize and compress them before placing them in your article, because DPS will not do it for you.

    I also read that the source assets for slideshows, scrolling images, and buttons ARE resampled as PNG images. Does this mean that DPS will compress them for you when you build the article? Does this mean it isn't worth bothering to resize these images at all? Can I just drop in the 300 DPI, 15 MB files used in the print magazine, and DPS will compress them when building the article - with no effect on the file size?

    And is this also the case with static background images?


    Thanks for your help!

    All images are automatically resampled based on the size of the folio you create. You can put in any image resolution you want; it doesn't matter.

    Neil

  • What is the best practice for enumerations in ADF?

    Dear all,

    What is the best practice for enumerations in ADF?

    I need to add enumerations to my application, e.g. sex, marital status.

    How should I implement them? Declaratively, with custom components, or is there another way?

    Thank you.
    Angelique

    Check out this topic - '5.3 Populating View Object Rows with Static Data' - in the Dev Guide:
    http://download.Oracle.com/docs/CD/E17904_01/Web.1111/b31974/bcquerying.htm#CEGCGFCA

  • Best practices for the use of reserved words

    Hello
    What is the best practice for using reserved words as column names?
    For example, if I insisted on using the word COMMENT as a column name, as follows:

    CREATE TABLE ...
    "COMMENT" VARCHAR2(4000),
    ...

    What impact down the track could I expect, and what problems should I be aware of when doing something like that?

    Thank you
    Ben

    The best practice is NOT to use reserved words anywhere.
    Developers are human beings. Humans have their moments of forgetting things.
    They will forget to use the double quotes, or you will have to force them to use double quotes everywhere.
    Both methods are Oracle-certified ways to end up in hell.

    ----------
    Sybrand Bakker
    Senior Oracle DBA
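To see why Sybrand warns against this, here is a small demonstration. It uses SQLite (via Python) only because it is easy to run; Oracle behaves analogously with double-quoted identifiers such as "COMMENT":

```python
# Once a reserved word is used as a column name, EVERY reference to it
# must be double-quoted, forever. SQLite stands in for Oracle here,
# with "order" playing the role of "COMMENT".
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Unquoted, the reserved word is a syntax error:
unquoted_failed = False
try:
    cur.execute("CREATE TABLE t (order VARCHAR(10))")
except sqlite3.OperationalError:
    unquoted_failed = True

# Quoted, it works, but the quotes are now required everywhere:
cur.execute('CREATE TABLE t ("order" VARCHAR(10))')
cur.execute('INSERT INTO t ("order") VALUES (?)', ("first",))
value = cur.execute('SELECT "order" FROM t').fetchone()[0]
```

Every query, every report, every ad-hoc script now has to remember the quotes, which is exactly the "moments of forgetting" problem described above.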

  • Best practices for Master Data Management (MDM) integration

    I am working on integrating MDM with Eloqua and am looking for the best approach to sync lead/contact data changes from Eloqua into our internal MDM hub (outbound only). Ideally, we would like that integration to be practically real-time, but my findings to date suggest that there is no such option. Any integration will involve some kind of schedule.

    Here are the options we have considered:

    1. "Exotic" CRM integration: use internal events to capture changes and queue them in the internal queue (QIP), and allow access to the queue from outside Eloqua via the SOAP/REST API
    2. Data export: set up a data export that is scheduled to run on request, and poll for the results externally via the SOAP/REST/Bulk API
    3. Bulk API: poll for changes that have happened since the previous poll through the Bulk API from outside Eloqua (not sure how this is different from the previous option)

    Two other options which may not work at all and which are potentially anti-patterns:

    • Cloud connector: create a campaign that polls for changes on a schedule, and configure a cloud connector (if possible at all) to notify an MDM endpoint to query the contact/lead record from Eloqua.
    • "Native" CRM integration (crazy): fake a native CRM endpoint (for example, Salesforce) and use internal events and external calls to Eloqua to push data into our MDM

    Related questions:

    1. What is the best practice for this integration?
    2. Is there an option that would give us close to real-time integration (technically asynchronous, but event-based / callback-driven)? (something like the outbound e-mail in Salesforce)
    3. What limits should we consider for these options? (for example, daily API call limits, SOAP/REST response size)

    If you can, I would try to talk to Informatica...

    To imitate the native-type integrations, you would use the QIP and control which activities get validated by internal events, just as you would with a native integration.

    You would also use the cloud connector API to allow you to set up a CRM (or MDM) integration program.

    You would add identification fields to the contact and account objects in Eloqua for their respective IDs in the MDM system, and keep track of the last MDM update with a date field.

    A scheduled task outside of Eloqua would run at a certain interval, extract the QIP changes, send them to MDM, and pull the contacts waiting to be sent, in place of the cloud connector.
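That scheduled task can be reduced to a watermark loop. Here is a sketch (Python; fetch_changed_contacts and push_to_mdm are hypothetical stand-ins for your actual Eloqua API client and MDM endpoint, since those depend on your setup):

```python
# Watermark-based outbound sync: each run pulls only what changed since
# the previous run and forwards it to the MDM hub.
from datetime import datetime, timezone

def run_sync(fetch_changed_contacts, push_to_mdm, last_sync):
    """Pull contacts modified since last_sync, forward them to MDM,
    and return the new watermark to persist for the next run."""
    watermark = datetime.now(timezone.utc)
    for contact in fetch_changed_contacts(since=last_sync):
        push_to_mdm(contact)        # outbound only: Eloqua -> MDM
    return watermark
```

Run it from any external scheduler at whatever interval your daily API call limits allow; persisting the returned watermark is what keeps each run incremental.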

    There isn't really anything quite like the outbound messaging, unfortunately. You can have form submissions send data to your server immediately (it would work a bit like integration rule collections running from form processing steps).

    Cheers,

    Ben

  • Measuring time server-side? Best practices for a turn-based game

    Hello

    What would be the best practice for measuring time in a turn-based game?

    I was looking at the room timeout, but using that would mean that for each round I would have to put the users in a new room?

    Is there a way I can measure time server-side and keep the users in the same room?

    If so, I could use PHP; otherwise we would need Java, which allows measuring a running timer.

    Cheers,

    G

    Hello

    You can definitely use PHP or Java - we provide server integration libraries for either. I don't know exactly what the use case is, so I can't comment on what makes the most sense, but if it is not information which must be totally secure, timing on the client can be a viable approach also.

    Nigel
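For what it's worth, the server-side idea asked about above is small in any language. A sketch (Python, purely illustrative): the room never changes; the server just records when the current turn started.

```python
import time

class TurnTimer:
    """Server-side turn timing: players stay in the same room; the
    server only records when the current turn started."""

    def __init__(self, turn_seconds):
        self.turn_seconds = turn_seconds
        self.turn_started = 0.0

    def start_turn(self, now=None):
        # monotonic() is immune to wall-clock adjustments
        self.turn_started = time.monotonic() if now is None else now

    def time_left(self, now=None):
        now = time.monotonic() if now is None else now
        return max(0.0, self.turn_started + self.turn_seconds - now)

    def expired(self, now=None):
        return self.time_left(now) == 0.0
```

When expired() flips to True, the server ends the turn; no room change is needed.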

  • Best practices for applying sharpening in your workflow

    Recently I have been trying to get a better understanding of some of the best practices for sharpening in a workflow. I guess I didn't realize it, but there are several places to sharpen. Which are best? Are they additive?

    My typical workflow involves capturing an image with a professional DSLR in RAW or JPEG, importing it into Lightroom, and exporting to a JPEG file for the screen or for printing, both at a lab and locally.

    There are three places in this workflow to add sharpening: in the DSLR, manually in Lightroom, and when exporting to a JPEG file or printing directly from Lightroom.

    It is my understanding that no sharpening is applied to RAW images even if you set sharpening in your DSLR. However, sharpening will be applied to JPEGs from the camera.

    Back to my question: is it preferable to sharpen manually in the SLR, in Lightroom, or to wait until you export your final JPEG or print? And are the effects additive? If I add sharpening in all three places, am I probably over-sharpening?

    You have to treat the two formats differently. RAW files never have any sharpening applied by the camera, only JPEG files do. Sharpening is often considered as a workflow with three steps (see here for a seminal paper on this idea).

    I. A capture sharpening step, which compensates for the loss of sharpness in detail due to the Bayer matrix and the anti-aliasing filter, and sometimes the lens or diffraction.

    II. A creative sharpening step, where some details in the image are 'highlighted' with sharpness (think eyelashes on a model's face), and

    III. Output sharpening, where you compensate for the loss of sharpness due to scaling/resampling or to the properties of the output medium (such as blur because of how the printing process works, or blur because of the way an LCD screen lays out its pixels).

    All three are implemented in Lightroom. I. and III. are essential and basically must always be performed. II. is up to your creative mind. I. is the sharpening that you see in the Develop panel. You need to zoom to 1:1 and optimize the settings. The default settings are OK but quite conservative. Usually you can increase the masking value a little so that you don't sharpen noise, and play with the other three sliders. Jeff Schewe gives an overview of a simple strategy for finding the optimal settings here. It is for ACR, but the principle remains the same. Most photos will benefit from a bit of optimization. Don't go overboard; just optimize for smoothness at 1:1.

    Stage II, as I said, is not essential, but it can be done using the local adjustment brush, or you can go to Photoshop for it. Stage III, however, is essential. This is done in the Export, Print or Web panel. You can't really preview these effects (especially print-oriented sharpening), and it will take a little experimentation to see what you like.

    For JPEG, sharpening is already done in the camera. You could add a small amount of extra capture sharpening in some cases, or simply lower the in-camera sharpening and then have more control in post, but generally it is best to leave it alone. Stages II and III, however, are still necessary.

  • AppAssure best practices for storing a machine image

    I work in a regulated environment where our machines are validated before being put into service.  Once they are put in service, changes to the machines are controlled and executed through a change request process.  We use AppAssure to back up critical computers, and with the 'always on' incremental backup it works very well for file/folder-level restores in cases where something is removed, moved, etc.

    Management asks that once a machine is validated, I take an image of it and store the image away, so that if there is a hardware failure the validated image can be restored and the machine is essentially back to its original validated state.  In addition to having the image of the machine as soon as it is validated, I need to also back up the machine on a regular basis in order to restore files, folders, databases, etc. on this machine.

    So my question is how to achieve this with AppAssure?  My first thought is to perform the base backup of the machine, then archive it, and then let AppAssure perform the scheduled backups.  If I need to restore the computer to the base image, then I restore the archive and then do the restore.  Is this a feasible solution and practice?  If not, please tell me the best way to accomplish what I want to do.

    Thank you

    Hi ENCODuane,

    You can carry out your plan of action in the following way:

    1. Protect the agent with the name "[Hostname\IP_Base]".

    2. Take the base image.

    3. Remove the agent from protection, but keep the recovery points.

    4. After these steps, you will have the base recovery point for the agent in the 'Recovery Points Only' section.

    5. On the agent computer, open the registry and change the Agent ID (changing one character will be enough): HKEY_LOCAL_MACHINE\SOFTWARE\AppRecovery\Agent\AgentId\Id

    For example, from

    94887f95-f7ee-42eb-AC06-777d9dd8073f

    to

    84887f95-f7ee-42eb-AC06-777d9dd8073f

    6. Restart the agent service.

    7. From this point, the Dell AppAssure Core will recognize the machine as a different agent.

    8. Protect the machine with the name "[Hostname\IP]".

    After these steps, you will have the base image for the 'Recovery Points Only' machine (with the name "[Hostname\IP_Base]", which will not be changed by rollups and transfers) and the protected machine with the name "[Hostname\IP]", where transfers will be made based on the configured policy.

  • Formula and best practices for calculating database storage requirements

    Hi gurus of the Oracle

    I need your help to calculate the storage requirement for the production database.

    Thank you

    Hitgon

    I query DBA_DATA_FILES to show total allocated space:

    SELECT SUM(bytes) AS allocated_bytes FROM dba_data_files;

    And for 'used' space, I run this:

    SELECT SUM(bytes) AS used_bytes FROM dba_segments;

    We don't need to digress into a discussion of what is truly 'used', as everyone knows that there is unused space within DBA_SEGMENTS. But it works for management!

    I have an automated report that is sent to me monthly. The same report also breaks it down by tablespace... allocated and used, as I noted above. Then I put it into Excel to generate the graph.

    Cheers,
    Brian
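Turning the two sums above into report figures is then simple arithmetic; a small illustrative sketch (Python; the inputs would come from the ALLOCATED_BYTES and USED_BYTES query results):

```python
# Derive the usual capacity-report figures from the allocated and used
# byte totals returned by the two queries above. Purely illustrative.
GIB = 1024 ** 3

def storage_report(allocated_bytes, used_bytes):
    return {
        "allocated_gib": round(allocated_bytes / GIB, 2),
        "used_gib": round(used_bytes / GIB, 2),
        "free_gib": round((allocated_bytes - used_bytes) / GIB, 2),
        "pct_used": round(100.0 * used_bytes / allocated_bytes, 1),
    }
```

The same arithmetic could of course be done directly in SQL or in the Excel sheet.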

  • Best practices for storing program config data on Vista?

    Hello everyone,

    I'm looking to get recommendations about where (and how) to best store the configuration data of a LabVIEW executable running under Vista.  I need to store a number of things like the window location, the values of controls, etc.  Under XP I stored them right in the VI's own execution path.  But in Vista, certain directories (for example, C:\Program Files) are now restricted without administrator rights, so if my program runs from there, I don't think it's going to be able to write its configuration file.

    Also, at the moment I'm just using Write To Spreadsheet File to store my variables.  Is this good, or are there better suggestions?

    Thank you!

    For configuration data, I use the Config File VIs or the OpenG Configuration VIs. The format has proved to be flexible during development (adding new values is backward compatible).

    XML would be nice, but the current implementation is inflexible when you add or change the data structure (the old configuration file cannot be reused).

    If the amount of data is small (just a window position and size, or a few file paths), the Windows registry is an alternative (giving a unique set for each user, if you wish).

    I have no experience with Vista.

    Felix
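The key property Felix describes for the INI-style approach is backward compatibility as keys are added. A sketch of the idea (Python's configparser rather than the LabVIEW Config File VIs, purely for illustration; on Vista the file itself should live in a user-writable location such as %APPDATA%, not under Program Files):

```python
# INI-style configuration: new keys can be added later without breaking
# old files, and readers can fall back to defaults for missing keys.
import configparser
import io

cfg = configparser.ConfigParser()
cfg["window"] = {"left": "100", "top": "80", "width": "640", "height": "480"}

buf = io.StringIO()          # in a real app: a file under %APPDATA%
cfg.write(buf)

cfg2 = configparser.ConfigParser()
cfg2.read_string(buf.getvalue())
left = cfg2.getint("window", "left")                      # 100
theme = cfg2.get("window", "theme", fallback="default")   # key absent
```

This is the "adding new values is backward compatible" behavior: a key missing from an old file simply falls back to a default instead of breaking the reader.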

  • Best practices for storing the price of an item in the database?

    In the United Kingdom we call the sales tax VAT, which is currently 17.5%.

    I store the price exclusive of VAT in the database.

    I keep the current VAT rate for the United Kingdom as an application variable (the VAT rate is set to change here in the United Kingdom in January).

    Whenever the site displays the price of an item (including VAT), it takes the price exclusive of VAT and adds the VAT dynamically.

    I have a section in the website called "Personal Shopper" which will search for goods in a fixed price range, e.g. one link is for under £20, another is £20 to £50.

    This means that my search query has to perform the VAT calculation for each item. Is this normal, or is it better to have a database column that stores the price including VAT?

    I'm also based in the United Kingdom, and that's what we do:

    In our products table, we record the product price without VAT and a VAT rate ID, which joins to a VAT rates table. So yes, the retail price calculation is done at the SQL level when querying. Storing the net, VAT and gross would effectively duplicate data, which is bad. It also means that come January we just update one row in one table and the whole site is fixed.

    However:

    When a person places an order, we store the product id, net amount, tax code, VAT amount and VAT percentage. This way there is never any issue with changing tax codes in your VAT codes table, because that will only affect the live prices being listed on your website. Furthermore, whenever you pull up old order data, you have the net amount, VAT amount and VAT percentage all hard-coded on your order line, to avoid confusion.

    I have seen a LOT of systems get confused after a VAT change, where in certain places it recalculates from real-time data and in others it displays stored order data, and there were differences.

    I have seen a lot of people have problems with tax changes before, and database space is so cheap that I always just store it against the order as a point-in-time snapshot.

    O.
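The snapshot idea described above can be sketched as follows (Python, with Decimal to avoid float rounding; the field names are illustrative, not from the original poster's schema):

```python
# Freeze net price, VAT percentage, VAT amount and gross on the order
# line at the moment the order is placed, as a point-in-time snapshot.
from decimal import Decimal, ROUND_HALF_UP

def order_line_snapshot(product_id, net_price, vat_rate_pct):
    vat = (net_price * vat_rate_pct / Decimal(100)).quantize(
        Decimal("0.01"), rounding=ROUND_HALF_UP)
    return {
        "product_id": product_id,
        "net": net_price,
        "vat_pct": vat_rate_pct,
        "vat": vat,
        "gross": net_price + vat,
    }

line = order_line_snapshot("SKU-1", Decimal("20.00"), Decimal("17.5"))
# Even after January's rate change, this line still shows the figures
# that applied when the order was placed.
```

Live catalog prices, meanwhile, keep joining to the VAT rates table, so the two concerns (current display price vs. historical order record) never get mixed up.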

  • Best practices for storing lead information

    Hi all

    I use a qualification process based on a lead assessment script. Via this script our reps ask the potential customer 8 questions, and a score is calculated.

    Now the reps will begin the process of converting leads into opportunities - prioritized according to the assessment script score.

    The information that is entered in the assessment script is stored in 8 fields on the lead record and is relevant to only one department in a hospital (anesthesia, for example). This information is very valuable to us in the future, because it tells us about the prospect's potential for purchasing other products.

    Now, I want to make sure that this information is passed on in the most appropriate way when leads are converted into accounts, contacts and opportunities. My first thought was to create 8 new fields on the opportunity record type and map the lead conversion to those fields. On reflection, this would leave me with a large amount of redundant data. Also, the data will be regularly updated, which will cause problems with this solution.

    Another option is to display the 8 lead fields and related information on the opportunity and contact record detail pages. Or I could pass the data to the contact record and tell the reps that this is where it should be updated in the future.

    I'm pretty new to OnDemand, so I might not be on the right track here! Any help will be much appreciated :-)

    Kind regards
    Allan

    Allan, once the lead is converted, the lead record (with your 8 fields) is available as a related record under the opportunity and contact records. This allows you to make updates to those fields in one place, without having to duplicate them by mapping them during lead conversion.
