Disk types for a Win2008R2 DC: best practices...

Hello

I am trying to upgrade my AD environment to Windows 2008 R2, and I am asking for feedback on best practices when choosing disk types and virtual machine settings. I intend to install fresh from scratch and migrate AD DS and everything else afterwards.

I intend to create a 50 GB volume for the OS itself and a 200 GB volume for user home directories and other user data.

In a virtual environment, is it advisable to locate the SYSVOL and AD database somewhere other than the default system drive?

For my 50 GB OS disk (domain controller), is it normal/good practice to store it with its virtual machine?

My 200 GB user-data disk should ideally be reattachable, intact, to another VM, to preserve the data in case of failure.

Which disk provisioning type would be normal to use with these kinds of disks?

Which disk mode (independent/persistent) would it be normal to specify in this scenario?

Once such a virtual disk has been created, is it possible to increase or decrease its size?

Thank you very much for any comments on these questions.

Best regards

Tor

Hello.

In a virtual environment, is it advisable to locate the SYSVOL and AD database somewhere other than the default system drive?

Yes, follow the same best practices here as you would in a physical environment.

For my 50 GB OS disk (domain controller), is it normal/good practice to store it with its virtual machine?

Probably yes. Why would you consider storing it elsewhere? Are you doing something with SAN replication, or is there another reason?

Which disk provisioning type would be normal to use with these kinds of disks?

I tend to use thin provisioning for OS volumes. For the absolute best performance, you can go with thick eager-zeroed disks. If you're talking about a standard file share for users, then any disk type would probably be fine. See "Performance Study of VMware vStorage Thin Provisioning" (http://www.vmware.com/pdf/vsp_4_thinprov_perf.pdf) for more information.

Which disk mode (independent/persistent) would it be normal to specify in this scenario?

This can depend on how you back up the server, but I would guess you don't want independent mode, since independent disks are excluded from snapshots and snapshot-based backup tools will therefore skip them. If you do use independent, make sure you choose persistent; you do not want to lose user data.

Once such a virtual disk has been created, is it possible to increase or decrease its size?

Growing a disk is very easy with vSphere and Windows 2008, and it can be done without downtime: grow the VMDK in vSphere, then extend the volume inside the guest with Disk Management or diskpart. Shrinking is a far more convoluted process and will most likely require manual intervention and downtime on your part.

Good luck!

Tags: VMware

Similar Questions

  • Best practices for selecting an authentication type

    Hello
    I use JDeveloper 11.1.2.1. I have been reviewing security and best practices (sorry Chris!) around the selection of authentication types.

    Frankly, I prefer basic HTTP authentication because it creates a login popup for you (simple, less coding), but I have come across some documents that make me wonder whether it should be avoided.


    1. This tutorial uses a form-based approach: http://docs.oracle.com/cd/E18941_01/tutorials/jdtut_11r2_29/jdtut_11r2_29.html

    2. This video by Frank Nimphius (at the 42-minute mark) uses basic authentication: http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/AdfSecurity/AdfSecurity.html

    3. The Fusion Developer's Guide for Oracle Application Development Framework 11g Release 2 (11.1.2.1.0) says:

    The most commonly used types of authentication are HTTP basic authentication and form-based authentication.
    It also indicates that the form-based login page must be a JSP or HTML file, [and] that you will not be able to build it with ADF Faces components.

    4. The Oracle Fusion Developer Guide (Frank Nimphius) states that a side effect of basic authentication is that a user is authenticated for all other applications running on the same server; you must not use it if your application requires logging out...

    5. The Oracle JDeveloper 11g Handbook says that basic authentication should NOT be used at all (page 776), because it is aimed primarily at older browsers and is NOT secure by current standards.

    I have been able to use basic authentication and HTTP digest authentication just fine; I have not attempted form-based for the moment.

    For fun, I tried choosing the HTTPS Client authentication type and received this very dignified (and readable; a wonder for Java, huh?) error message:

    RFC 2068 Hypertext Transfer Protocol -- HTTP/1.1:
    10.4.2 401 Unauthorized
    The request requires user authentication. The response MUST include a WWW-Authenticate header field (section 14.46) containing a challenge applicable to the requested resource. The client MAY repeat the request with a suitable Authorization header field (section 14.8). If the request already included Authorization credentials, then the 401 response indicates that authorization has been refused for those credentials. If the 401 response contains the same challenge as the prior response, and the user agent has already attempted authentication at least once, then the user SHOULD be presented the entity that was given in the response, since that entity MAY include relevant diagnostic information. HTTP access authentication is explained in section 11.

    I'm sure there is an "it depends" answer to this, but I would like to use the most reasonable and secure type, without too much overhead if possible.

    Hello

    Basic authentication uses base64 encoding and is OK to use if the site is accessed over HTTPS. The browser sends the user's credentials with every request, which makes this approach less than optimal if used outside of HTTPS. Form-based authentication is easy to implement and, unlike basic authentication, it does not send the user name and password with every request. The recommendation is always to use HTTPS for secure sites. Most of our samples describing login don't use HTTPS, as that configuration goes beyond what the samples are supposed to demonstrate. For security, "without much overhead if possible" means weakening security. In your case, since you have tried digest authentication, I guess that is the one with the least amount of overhead.

    Frank
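    To make the base64 point concrete, here is a minimal Java sketch of what the browser does on every request under basic authentication. The URL and credentials are hypothetical; the takeaway is that base64 is reversible encoding, not encryption, which is why HTTPS is needed underneath.

        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.Base64;

        public class BasicAuthSketch {
            public static void main(String[] args) throws Exception {
                // Base64-encode "user:password": reversible encoding, NOT encryption.
                String credentials = Base64.getEncoder()
                        .encodeToString("scott:tiger".getBytes("UTF-8"));

                // Hypothetical protected resource.
                URL url = new URL("https://example.com/protected/resource");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();

                // The browser adds exactly this header to every request after login.
                conn.setRequestProperty("Authorization", "Basic " + credentials);

                System.out.println("HTTP status: " + conn.getResponseCode());
            }
        }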

  • Best practices for updating cascading picklist mappings for the Account record type

    1. Most of the picklist value names in the parent and related lookups have been changed in the master list of the external application, so the same changes need to be made in CRMOD.

    2. To update a picklist value, do we need to DISABLE the existing value and CREATE a new one?

    3. Are there best practices to avoid manually mapping the cascading picklists for the Account record type? We have about 500 dropdown values to map between the parent and related picklists.

    Thank you!

    Mahesh, I recommend disabling the existing values and creating new ones. This means remapping the cascading dropdown lists manually.

  • What are the best practices for creating a time-only data type (not a date)?

    Hi gurus,

    We use a 12c DB, and we have a requirement to create a column with a time-only datatype. Could someone please describe the best practices for creating this?

    I would greatly appreciate any ideas and suggestions.

    Kind regards
    Ranjan

    Hello

    How do you intend to use the time?

    If you are going to combine it with DATEs or TIMESTAMPs from another source, then an INTERVAL DAY TO SECOND or a NUMBER may be better.

    Will you need to perform arithmetic on the times, for example increase a time by 20% or take an average? If so, NUMBER would be preferable.

    Are you just going to display it? In that case, INTERVAL DAY TO SECOND, DATE or VARCHAR2 would work.

    As BluShadow said, it depends.
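    For illustration, here is a minimal SQL sketch of the INTERVAL approach; the table, column and values are hypothetical:

        CREATE TABLE shift_schedule (
            shift_name  VARCHAR2(30),
            start_time  INTERVAL DAY(0) TO SECOND(0)   -- time of day, no date part
        );

        INSERT INTO shift_schedule VALUES ('early', INTERVAL '0 06:30:00' DAY TO SECOND);

        -- Combining with a DATE from another source yields a full point in time:
        SELECT TRUNC(SYSDATE) + start_time AS next_start FROM shift_schedule;

    As the answer notes, if you mainly need arithmetic such as averages, a NUMBER of seconds past midnight is easier to work with.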

  • Best practices for reading .ini files

    Hello LabViewers

    I have a pretty big application that does a lot of hardware communication with various devices. I created an executable, because the software runs on multiple sites. Some settings are currently hardcoded; others I put in an .ini file, such as the camera focus. The thinking was that this kind of parameter may vary from one site to another and can be set by a user in the .ini file.

    I would now like to extend the application to support two different versions of the key hardware device (an atomic force microscope). I think it makes sense to do this with two versions of the .ini file. I intend to create two different .ini files, and a trained user could still adjust settings such as the camera focus if necessary; the other settings they cannot touch. I also aim to force the user to select an .ini file when starting the executable, via a file dialog, unlike now, where the (only) .ini file is read in automatically. If no .ini file is specified, the application would stop. Does this use of .ini files make sense?

    My real question now revolves around how to manage the reading of the .ini file. My estimate is that 20 to 30 settings will be stored in it. I see two possibilities, but I don't know which is the better choice, or whether I'm missing a third:

    (1) (current solution) I created a reader VI that writes all the .ini values to project global variables. All other VIs only read the globals (there are no other writers), to avoid race conditions.

    (2) I pass the path of the .ini file to the subVIs and read the values from the .ini file when needed. I can open it read-only.

    Which is best practice? Which is more scalable? Advantages/disadvantages?

    Thank you very much

    1. I recommend using just one configuration file. Just have a key that says which device type is actually in use. This will make things easier on the user, because they will not have to keep selecting the right file.

    2. I use the globals. There is no need to constantly open a file, read values and close it again when the content is the same everywhere. And since it is just one read at startup, globals are perfect for this.
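    LabVIEW globals have no text-code equivalent, but for illustration here is the same pattern in Java: parse the configuration file once at startup, then expose the values read-only. The file name and keys are hypothetical, and java.util.Properties handles flat key=value content rather than full .ini sections.

        import java.io.FileInputStream;
        import java.util.Properties;

        public final class Config {
            private static final Properties SETTINGS = new Properties();

            // Load once at startup; afterwards everything is read-only,
            // which avoids the race conditions option (1) worries about.
            public static void load(String path) throws Exception {
                FileInputStream in = new FileInputStream(path);
                try {
                    SETTINGS.load(in);
                } finally {
                    in.close();
                }
            }

            public static String get(String key, String fallback) {
                return SETTINGS.getProperty(key, fallback);
            }
        }

        // Usage (hypothetical keys):
        //   Config.load("afm-v2.ini");
        //   String focus = Config.get("camera.focus", "0.5");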

  • Best practices to follow before an EqualLogic disk firmware upgrade

    Hi team,

    I have two EqualLogic arrays running in the same group with the default single storage pool. The old EQL is set up as RAID 5 and the newer one as RAID 6. Both have 12 x 600 GB disks. Last week we joined the two EQLs into the same group. Everything went well. The current firmware on both EQLs is 6.0.9.

    We added the group to SAN HQ. Now SAN HQ is showing a constant alert that the disk drive firmware is out of date on the OLD EQL.

    Please let me know the best practices to follow before any disk firmware upgrade on the EQL.

    Is downtime necessary?

    No downtime is needed. We do it on our production vSphere cluster of 300 VMs.

    Kind regards

    Joerg

  • What type of disc is model 7kc-00003; is it OEM or retail?

    Original title: OS disk

    What type of disc is model 7kc-00003; is it OEM or retail?

    Tom

    Hello Tom,

    Thanks for posting your query in the Microsoft Community. We are happy to help you.

    Please share the following:

    1. How did you install Windows 7 on your computer?

    2. Was it preinstalled, or installed from a disc?

    If Windows 7 was pre-installed, then it is an OEM copy. If you installed it using installation media, then it is a retail product.

    In addition, check with the manufacturer of the computer for more details.

    Hope this information helps. Feel free to get back to us for other queries. We will be happy to help you.

    Thank you and best regards,

    Mathias

  • Storage best practices or advice

    I'm trying to develop a BlackBerry Java application and am currently looking at local storage. The app will be for public use.

    Does anyone have advice on when it is best to use the SD card vs. the persistent store? Is there a good best-practices or advice document out there somewhere?

    This application will have two types of data: preferences, and what would be data files on a desktop system.

    I have read up on using the persistent store, and it seems to be a good option because of the level of control over the data for synchronization and such. But I have noticed that some open-source BB applications use the SD card, not the persistent store.

    Since I'll be deploying the application to the general public, I know I'm working with many device configurations as well as with limits set by company policy (assuming those users can even install the app). So any advice on navigating these issues regarding storage would be greatly appreciated.

    Thank you!

    The persistent store is fine for most cases.

    If the transient data is very large, or must be copied to the device via the USB cable, then maybe the SD card should be considered.

    However, many (if not most) people do not have an SD card.
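    For the preferences side, here is a minimal sketch of the BlackBerry Java persistent store API. The store key below is an arbitrary example; in practice you derive a unique long for your application so stores do not collide.

        import net.rim.device.api.system.PersistentObject;
        import net.rim.device.api.system.PersistentStore;
        import java.util.Hashtable;

        public class PrefsStore {
            // Arbitrary example key; derive your own unique long in practice.
            private static final long STORE_KEY = 0x2ba5f8081f7ef2caL;

            public static void savePrefs(Hashtable prefs) {
                PersistentObject store = PersistentStore.getPersistentObject(STORE_KEY);
                synchronized (store) {
                    store.setContents(prefs);   // Hashtable is persistable on BB
                    store.commit();             // flush the contents to flash
                }
            }

            public static Hashtable loadPrefs() {
                PersistentObject store = PersistentStore.getPersistentObject(STORE_KEY);
                Hashtable prefs = (Hashtable) store.getContents();
                return (prefs != null) ? prefs : new Hashtable();
            }
        }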

  • Best practice? Storage of large data sets.

    I'm programming a client to access customer address information. The data is delivered from an MSSQL Server via a web service.

    What is the best practice for holding this data: ListFields? String arrays? Or an XML file that gets parsed?

    Any ideas?

    Thank you, hhessel

    These debates come up from time to time, usually about how to get the data onto the phone, after someone asks why BB does not support databases. There is no magic here; it depends on what you do with the data. For the general considerations, see J2ME on sun.com, or JVM issues more generally. We should all get a BB hardware reference too, LOL...

    If you really have a lot of data, there are zip libraries, and I often use my own "compression" schemes.

    I personally go with simple types in the persistent store, and I built my own B-tree indexing system, which is also persistable and even testable under J2SE. For strings, I store repeated prefixes only once (even though I eventually gave up storing them as single whole strings). So if I have hundreds of strings that start with "http://www.pinkcat-REC", I don't store that part every time. Before you worry about the overhead of concatenating the pieces back together: the prefix gets picked up by the indexes you use to find the string anyway (so of course you have to spend time putting the pieces back together, but the extra space the index needs is low).
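    A simplified sketch of that prefix trick (not the poster's actual code): keep each shared prefix once in a table and store only a prefix index plus the suffix per string. It is written against J2ME-era collections, since this is BlackBerry Java.

        import java.util.Vector;

        public class PrefixTable {
            private final Vector prefixes = new Vector();

            // Return the index of the prefix, adding it if it is new.
            public int internPrefix(String prefix) {
                int i = prefixes.indexOf(prefix);
                if (i < 0) {
                    prefixes.addElement(prefix);
                    i = prefixes.size() - 1;
                }
                return i;
            }

            // Rebuild the full string; the concatenation cost is paid only
            // when the full value is actually needed, e.g. after an index
            // lookup has already located the record.
            public String expand(int prefixIndex, String suffix) {
                return (String) prefixes.elementAt(prefixIndex) + suffix;
            }
        }

        // Usage: hundreds of URLs starting "http://www.pinkcat-REC..." share
        // one table entry; each record stores just the index and the tail.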

  • OEM 12c best practices for monitoring an 11.1 DB RAC environment + Data Guard

    OEM 12c Release 5 is the only monitoring tool in our environment... is there a best-practice model for which aspects need to be monitored and for their default values?

    This doc for the 12c DB is very useful; it covers best practices for high availability, and they talk a lot about RAC/DG monitoring: https://docs.oracle.com/database/121/HABPT/monitor.htm#HABPT003

    The 11.2 version is here: https://docs.oracle.com/cd/E11882_01/server.112/e10803/monitor.htm#g1011041. I would read them both, though, as there are some new features in 12c that may still apply to 11g.

    As far as metrics/templates go, if you create a template from the target type (Monitoring Templates -> Create -> select the target type), it will include all the metrics for that target type, and that is as close as you'll get to Oracle "default thresholds". It is the product teams' best effort. Of course the thresholds will not be perfect for everyone, but it's a starting point!

  • Cursors - best practices

    Hi all

    This question is based on the thread: Re: best practices with cursors and loops

    Here I've created the same script with different methods.

    1. CURSOR
    ------------------

    DECLARE
        CURSOR table_count
        IS
            SELECT table_name
            FROM user_tables
            ORDER BY 1;
        sqlstr VARCHAR2(1000);
        numrow NUMBER;
    BEGIN
        dbms_output.put_line('Start time ' || to_char(sysdate, 'dd-mon-yyyy hh24:mi:ss'));
        FOR i IN table_count
        LOOP
            sqlstr := 'SELECT COUNT(*) FROM ' || i.table_name;
            EXECUTE IMMEDIATE sqlstr INTO numrow;
            IF numrow > 0 THEN
                dbms_output.put_line(RPAD(i.table_name, 30, '.') || ' = ' || numrow);
            END IF;
        END LOOP;
        dbms_output.put_line('End time ' || to_char(sysdate, 'dd-mon-yyyy hh24:mi:ss'));
    END;

    My understanding:

    It does row-by-row processing, which generally means slower performance.

    2. BULK COLLECT
    -----------------------------

    DECLARE
        sqlstr VARCHAR2(1000);
        numrow NUMBER;
        TYPE name_tab IS TABLE OF VARCHAR2(30);
        tname name_tab;
    BEGIN
        dbms_output.put_line('Start time ' || to_char(sysdate, 'dd-mon-yyyy hh24:mi:ss'));
        SELECT table_name
        BULK COLLECT INTO tname
        FROM user_tables
        ORDER BY 1;
        FOR i IN tname.FIRST .. tname.COUNT
        LOOP
            sqlstr := 'SELECT COUNT(*) FROM ' || tname(i);
            EXECUTE IMMEDIATE sqlstr INTO numrow;
            IF numrow > 0 THEN
                dbms_output.put_line(RPAD(tname(i), 30, '.') || ' = ' || numrow);
            END IF;
        END LOOP;
        dbms_output.put_line('End time ' || to_char(sysdate, 'dd-mon-yyyy hh24:mi:ss'));
    END;

    My understanding:

    1. It avoids context switching.
    2. It uses more PGA memory.

    3. CURSOR WITH BULK COLLECT

    --------------------------------------------------

    DECLARE
        CURSOR table_count
        IS
            SELECT table_name
            FROM user_tables
            ORDER BY 1;
        sqlstr VARCHAR2(1000);
        numrow NUMBER;
        TYPE name_tab IS TABLE OF VARCHAR2(30);
        tname name_tab;
    BEGIN
        OPEN table_count;
        FETCH table_count BULK COLLECT INTO tname;
        CLOSE table_count;
        dbms_output.put_line('Start time ' || to_char(sysdate, 'dd-mon-yyyy hh24:mi:ss'));
        FOR i IN tname.FIRST .. tname.COUNT
        LOOP
            sqlstr := 'SELECT COUNT(*) FROM ' || tname(i);
            EXECUTE IMMEDIATE sqlstr INTO numrow;
            IF numrow > 0 THEN
                dbms_output.put_line(RPAD(tname(i), 30, '.') || ' = ' || numrow);
            END IF;
        END LOOP;
        dbms_output.put_line('End time ' || to_char(sysdate, 'dd-mon-yyyy hh24:mi:ss'));
    END;

    My understanding:

    I really don't understand why some people prefer this method of having both the explicit CURSOR and BULK COLLECT.

    4. IMPLICIT CURSOR

    ----------------------------------

    DECLARE
        sqlstr VARCHAR2(1000);
        numrow NUMBER;
    BEGIN
        dbms_output.put_line('Start time ' || to_char(sysdate, 'dd-mon-yyyy hh24:mi:ss'));
        FOR i IN (SELECT table_name
                  FROM user_tables
                  ORDER BY 1)
        LOOP
            sqlstr := 'SELECT COUNT(*) FROM ' || i.table_name;
            EXECUTE IMMEDIATE sqlstr INTO numrow;
            IF numrow > 0 THEN
                dbms_output.put_line(RPAD(i.table_name, 30, '.') || ' = ' || numrow);
            END IF;
        END LOOP;
        dbms_output.put_line('End time ' || to_char(sysdate, 'dd-mon-yyyy hh24:mi:ss'));
    END;

    My understanding:

    It should also give better performance compared with the CURSOR loop.

    Given that the four methods above do the same work, please explain how to choose the correct method for different scenarios. That is, what do we have to consider before choosing a method?

    I asked Tom Kyte this question on AskTom a few years ago. He recommended implicit cursors because:

    • they fetch 100 rows at a time (not 500);
    • PL/SQL manages the opening and closing of the cursor automatically.

    He mentioned one important exception: if you need to change the data, and not just read it, you need to "bulk" the writes as much as the reads, and that means using FORALL.

    To use FORALL for the writes, you must use BULK COLLECT for the reads, and you should almost always use LIMIT with BULK COLLECT.

    So: for "bulk writes", use FORALL. For the "bulk reads" that feed the "bulk writes", use BULK COLLECT with LIMIT. For "bulk reads" where you do not change any data, implicit cursors are simplest. (See the sketch below.)

    Best regards, Stew Ashton
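    To make that advice concrete, here is a minimal PL/SQL sketch of the read-with-BULK-COLLECT-LIMIT, write-with-FORALL pattern; the emp table and the 10% raise are hypothetical:

        DECLARE
            CURSOR c IS SELECT empno, sal FROM emp;
            TYPE empno_tab IS TABLE OF emp.empno%TYPE;
            TYPE sal_tab   IS TABLE OF emp.sal%TYPE;
            empnos empno_tab;
            sals   sal_tab;
        BEGIN
            OPEN c;
            LOOP
                -- Bulk read: 100 rows per fetch, like an implicit cursor FOR loop.
                FETCH c BULK COLLECT INTO empnos, sals LIMIT 100;
                EXIT WHEN empnos.COUNT = 0;
                -- Bulk write: one context switch per batch instead of one per row.
                FORALL i IN 1 .. empnos.COUNT
                    UPDATE emp SET sal = sals(i) * 1.1
                    WHERE empno = empnos(i);
            END LOOP;
            CLOSE c;
        END;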

  • Dimension design best practices

    Hello

    I'm about to start a new project!

    Do you have ideas on best practices for defining dimensions? A presentation or conference talk would help.

    I ask this question because it's kind of a mix between an art and a science.

    And the current metadata provided seems to have redundancy in its GL segments.

    I don't want to go and map each segment to its own dimension; I think that would be counterproductive.

    Thank you for your comments

    Regards

    You may be able to get some advice from the technical point of view by searching online or via the database administrator's guide.

    If you want to approach it from the functional point of view, you will need professional help. The design will depend entirely on the needs of your business.

    The only thing I can say is that Essbase and Planning are analytical types of applications, so we shouldn't try to bring in the level of detail that exists in the transactional system.

    Kind regards

    Sunil

  • Best practices for Master Data Management (MDM) integration

    I am working on integrating MDM with Eloqua and am looking for the best approach to sync lead/contact data changes from Eloqua into our internal MDM hub (outbound only). Ideally we would like the integration to be practically real-time, but my findings to date suggest there is no such option; any integration will involve some kind of schedule.

    Here are the options that we had:

    1. "Exotic" CRM integration: use internal events to capture changes into the internal queue (QIP) and allow access to the queue from outside Eloqua via the SOAP/REST API
    2. Data export: set up a data export that is scheduled to run on demand, and poll externally via the SOAP/REST/Bulk API
    3. Bulk API: poll for changes that have happened since the previous poll through the Bulk API from outside Eloqua (not sure how this is different from the previous option)

    Two other options which may not work at all and are potentially anti-patterns:

    • Cloud connector: create a scheduled campaign querying for changes, and configure a cloud connector (if possible at all) to notify an MDM endpoint to query the contact/lead record from Eloqua.
    • "Native" CRM integration (crazy): fake a native CRM endpoint (for example, Salesforce) and use internal events and external calls from Eloqua to push data into our MDM

    Questions:

    1. What is the best practice for this integration?
    2. Is there an option that would give us close to real-time integration (technically asynchronous, but event-based / callback-driven)? (something like outbound messaging in Salesforce)
    3. What limits should we consider for these options? (for example daily API calls, SOAP/REST response size)

    If you can, I would try to talk to Informatica...

    To imitate the native-type integrations, you would use the QIP and control which activities get posted to it via internal events, as you would with a native integration.

    You would also use the cloud connector API to allow you to set up a CRM (or MDM) integration program.

    You would have ID fields added to the contact and account objects in Eloqua for their respective IDs in the MDM system, and keep track of the last MDM update with a date field.

    A task scheduled outside of Eloqua would run at a certain interval, extract the QIP changes to send to MDM, and pull the contacts waiting to be sent, in place of the cloud connector.

    Unfortunately there isn't really anything like outbound messaging. You can have form submits push data to a server immediately (it would be a bit like running integration rules from form processing steps).

    See you soon,

    Ben

  • Is there a best practice workflow?

    Hello

    I'm new to Premiere Pro.

    Is there a best practice for workflows?

    One of my challenges, or gaps in understanding, is editing files on an external hard drive. Is it OK to do that, does it cause problems, or is it better to keep the files on the iMac and then, when the project is completed, batch-transfer them to the external hard drive?

    Thanks for your help!

    PrPro is a little hard on the equipment... it grabs so many bits & pieces at a time, here & there, as it reads and/or renders and such, that it can create congestion points in the computer's processing. So... a typical setup to date has been to have 4 to 6 'internal' disks in the computer, individually or in a striped hardware RAID 0, plus a programs/system/OS drive. There are of course many people working from NAS (network attached storage) or other such means, with rather... nice... external hardware.

    For external drives, the connections (in order of preference by throughput) are Thunderbolt, eSATA II and USB3... and until recently only the first two were really capable of good use in a read/write situation, with one exception: for 1080-and-lower clips and exports, USB3 was enough for one of the two one-way uses: media OR exports.

    Bill Gehrke ( http://ppbm7.com/index.php/tweakers-page ), who is frequently in the Hardware forum ( https://forums.adobe.com/community/premiere/hardware_forum ), has tested a few Samsung T1 SSDs lately and found them capable, even over USB3, of sustained throughput such that several parts of the PrPro workload could sit on one and editing still goes very well. In addition, there are some internal drives he has used that have the speed to hold entire projects... when connected via proper means inside the computer. So I would check the Hardware forum and the Tweakers Page for his latest hardware configurations & drive reviews for working on PrPro projects.

    Neil

  • Best practices for backup of a VM Web server?

    This question comes from another thread that I created, but I figured I might as well start a fresh thread, since it is its own question.

    Basically, I have an environment that I inherited when I started my new job. Within this environment there is a lone VM that runs our web server and was created using VMware Player (we will move to Workstation for the snapshot feature).

    I am extremely concerned by the fact that there is no backup strategy for it, so I'm interested in finding the best way to back this virtual machine up.

    Being that it's a virtual machine, I instantly thought of simply copying it, but of course I'm open to whatever best practices dictate.

    A lot of what I've read so far says to stop or suspend the virtual machine and then copy its files; however, is there a way to do that without temporarily taking down my web server? We have customers around the world who access the server from different time zones, and I want it to be up 24/7.

    You'll have to stop the virtual machine and close VMware Player in order to uninstall VMware Player and install VMware Workstation, and that is the moment when you should make a master backup copy of the virtual machine. Once VMware Workstation is installed and the virtual machine is up and running again, then, to reduce the downtime for subsequent backups, you don't necessarily have to do it the way Richardson Porto proposed, although I don't disagree with what he says.

    In theory anyway, and in practice: when you want to back up, you would take a hot snapshot and then copy the parent disk, since it is now read-only and the virtual machine keeps running while you make the copy. Once you have copied the parent disk, you can then remove the snapshot, merging the delta and parent disks and leaving things ready for the next time you want to back up. The time the virtual machine is interrupted during the hot snapshot and the hot snapshot delete should be considerably less than a full stop and restart of the virtual machine. The reason I said "in theory" is that, while it is supposed to work in practice, if you properly shut down the guest operating system before taking the snapshot, you do not have to rely on VMware Workstation to make sure everything works while hot. Not that you should have problems doing it hot; however, you remove a layer of the process that has the potential to go wrong compared to doing it cold, even if cold takes a little longer.

    In both cases, I would avoid cloning, because it changes the UUID and MAC address of the clone relative to the parent, and by creating a second instance you technically need a second license for the Windows OS installed in the cloned virtual machine. In other words, a complete, unedited archived copy of the original is a legitimate backup; a clone, IMO, is a second instance, because it has a different UUID and MAC address. It would be like taking one licensed copy of the operating system and installing it on two different physical computers. Also, by doing as I suggested, making a master backup copy of the virtual machine and then just backing up the read-only parent disks: the virtual hard disks are not themselves a virtual machine, but that copy, as a backup, can be used to restore the same virtual machine without any changes to the UUID, MAC addresses, etc.

    Finally, you should look into what you are running in the guest operating system and how to properly back up the web site, database, user data, etc., inside the guest OS. In other words, you should always have a backup copy of the virtual machine and of everything needed in the guest OS, so you can restore the data without necessarily restoring the whole virtual machine. IMO it is better to have multiple backups, and of multiple types, so you have more options in a recovery scenario!
