Swap Partition - best practices

Hi all

What is considered best practice for creating the swap partition?

The ESX 4 Service Console is based on Red Hat Enterprise Linux Server version 5.1.

I searched their knowledge base and found the following article:

Q. If I add several hundred GB of RAM to a system, do I really need

hundreds of GB of swap space? (kbase.redhat.com/faq/docs/DOC-15252)

A. Short answer:

• Systems with 4 GB of RAM or less require a minimum of 2 GB of swap space

• Systems with 4 GB to 16 GB of RAM require a minimum of 4 GB of swap space

• Systems with 16 GB to 64 GB of RAM require a minimum of 8 GB of swap space

• Systems with 64 GB to 256 GB of RAM require a minimum of 16 GB of swap space
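For illustration, the sizing table above can be turned into a small shell helper. This is my own sketch: the function name is made up, and the exact boundary cases (4, 16, 64 GB fall into the lower band here) follow my reading of the table, which does not spell them out.

```shell
# Print the minimum recommended swap size in GB for a given amount of
# RAM in GB, following the Red Hat guidance quoted above.
recommended_swap_gb() {
    ram_gb=$1
    if [ "$ram_gb" -le 4 ]; then
        echo 2
    elif [ "$ram_gb" -le 16 ]; then
        echo 4
    elif [ "$ram_gb" -le 64 ]; then
        echo 8
    else
        echo 16
    fi
}

recommended_swap_gb 32    # a 32 GB host falls in the 16-64 GB band
```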

Thanks in advance,

Ronen.

1.6 GB, as the Service Console is limited to 800 MB of memory.

Duncan

VMware communities user moderator | VCP | VCDX

-

Tags: VMware

Similar Questions

  • Best practices for HDD partitioning on Windows Server 2008 and Windows Server 2012

    What is the best practice for HDD partitioning on Windows Server 2008 and 2012?

    It might also be worth asking at http://social.technet.microsoft.com/Forums/en-US/winservergen/threads, where you may get more answers.

    That said, I would say it very much depends on what you intend to do with the server, and of course on how much space you have available.

    For a general server I would probably go with two volumes: the C:\ drive for the operating system and a second volume for your data, for example E:\. I recommend at LEAST 30 GB for C:\, preferably 40-50 GB, since updates, patches, etc. will eat space over time, and it's much easier to start big than to try to expand later.

    If you are running Terminal Services, then you will probably need a larger C:\ drive, as a large part of the user profile data is stored there, so it can run out of space fairly easily.

    As I said, it depends on what you do and on how much space and how many disks you have available; it isn't really a one-size-fits-all answer.

  • Best practices for VM disk partitioning

    I want to create a Windows 2012 server with C:\ and D:\ partitions; what is the best practice? It is, by the way, a SQL database server. Should I create a virtual disk for C: and then add another virtual disk for D:, all on the same datastore? I read that partitioning tools are more for bare-metal machines, and that for a virtual machine two separate drives would be easier and better for recovery or future expansion of the drive.

    Thank you

    Mike

    Note: Discussion moved to Virtual Machine & Guest OS
    Yes, create a separate virtual disk (VMDK) for each partition or volume under Windows. If the workload is light to begin with, you can keep all the VMDKs in the virtual machine's working directory; if you later need to split the disks across different datastores, you can do that then.
    What can you tell us about your workload and the virtual infrastructure you have? What version of vSphere are you licensed for?
  • Best practices: multiple partitions on a single VMDK, or one partition per VMDK

    Hello all-

    I would like to get your opinions on best practices for a file server's VMDK setup.

    The C drive partition would be allotted to the operating system, while E, F, ... would store the data.

    Configuration 1:

    vmdk1 = thick-provisioned disk holding the C partition.

    vmdk2 = thick-provisioned disk holding partitions E, F, ...

    Configuration 2:

    vmdk1 = thick-provisioned disk holding the C partition

    vmdk2 = thick-provisioned disk holding the E partition

    vmdk3 = thick-provisioned disk holding the F partition

    .......

    Also, the data partitions would be configured as independent + persistent virtual disks because of snapshots. My logic is that the OS (C drive) is what snapshots are for, e.g. when testing new software, while the data partitions act as storage disks that must keep the most recent files regardless of a rollback to an older snapshot. BTW, the data partitions mostly hold Word documents, Excel files, photos and so on.

    Also, I realize that I could have a single data partition, e.g. E:, with several shared folders, but given that each folder is for a different department, that could cause more trouble when space runs low in the future. A large VMDK could also take more time to grow. Not sure.

    Thank you

    Hello

    In general, virtualization does not change much about disk IO.

    You can use the same rules that you would use to size a physical server.

    Multiple VMDKs mean multiple targets for your I/O load.

    The best solution, if a high IO load/throughput or a low response time has to be reached, is to create multiple VMDKs and spread them over several datastores.

    HtH

  • Virtual machine configuration for SRM - swap file / virtual memory files - best practice for replication?

    Hello, I am very new to the VMware DR model and have a few persistent questions.

    What is the recommended best practice re: Windows guest virtual machines and virtual memory files? I think it would be unnecessary to replicate this frequently changing data to the DR site. I have several virtual machines with 6-8 GB of memory, and I'm wondering how I can isolate the ESX memory swap files and the Windows virtual memory files so they do not get replicated to the DR site as often as the OS/data, if it's even necessary to replicate them at all.

    We use vSphere vCenter 4.1 in linked mode, SRM 4.1, and two on-site Celerra NX4s with Replicator V2.

    I wonder if I can place the page files on a file system that is NOT replicated, or on a file system that is only replicated every 24 hours, or once a week.

    My reasoning here is: once the file is there, the operating system doesn't really care about the changes in this file, and Windows dumps its virtual memory on a restart, as does the .vswp file.

    What I mean is building a dedicated virtual disk and placing the Windows swap file on that virtual disk. This virtual disk could live in a datastore hosted on a Celerra file system that does not get replicated as frequently as the OS/data file systems.

    Or am I completely off base here?

    I think that if you tried to replicate these files and maintain a 10-minute sync interval, you would need a ton of bandwidth.

    Any suggestions or recommendations, or even pointers to articles, are worth points = D.

    Thanks for your time.

    Given that the configuration is made at the cluster level and has no impact on SRM, replication is not a factor. Don't forget not to replicate the LUNs holding the swap; just have a LUN configured on the recovery side as well, and define it as the cluster swap file location there too. Also, on the critical virtual machines we define memory reservations, so the pagefile is not a factor at all.
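    Alongside the cluster-level swap file location mentioned above, a per-VM override exists in the .vmx file. The fragment below is an illustrative sketch; the datastore name is hypothetical.

```
# Place this VM's .vswp file on a datastore excluded from replication.
# "swap_local" is a made-up datastore name for illustration.
sched.swap.dir = "/vmfs/volumes/swap_local"
```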

    Kind regards...

    Jamie

    If you found this information useful, please consider awarding 'Correct' or 'Helpful' points.

    Remember, if it isn't one thing, it's your mother...

  • Common jar components, naming, best practices

    Forms 11.1.2.1

    I added a few jar components, for example Directprint.jar.

    As they have dependencies on other common components, you must download those as well and add them in formsweb.cfg.

    For example, Directprint needs fontbox, pdfbox and commons-logging.

    These components are alive, so their versions change.

    Well, now I have Java 1.6-1.8 clients, so I used the opportunity to upgrade some of the jar components.

    For example, from pdfbox-1.8.6.jar to pdfbox-1.8.11.jar.

    The question is: what is the best practice? Strip the version from the name (rename it to just pdfbox.jar) so I don't have to change formsweb.cfg, or preserve the versioned name and update the archive line in formsweb.cfg?

    If you rename the file, do you have to redo the manifest properties and re-sign everything beforehand? A hassle?

    I guess it's a matter of taste and requirements. Personally I don't like version numbers in file names, because ultimately I don't care what version of a certain file I am running... as long as it works. I certainly do not want to have to change the file name in each config file, and risk forgetting one.

    However, if you want to be able to move forward and backward between versions, I guess it's easier to have a version number in the file name. If you encounter a no-go in the current version, just swap the file name in the configuration files. Of course, if you use version control, you can roll back to the revision before the problem was introduced, generate the .jar file from scratch and redeploy. If you don't have version control, you can only hope that you have a backup of your file.

    By adding a version number to the file name, you will need to update formsweb.cfg whenever you add a new version of your jar file. By not doing so, you will not have to.
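    To illustrate the trade-off, here is a hedged sketch of the relevant formsweb.cfg line; apart from the jars named earlier in the thread, the file names and versions are examples, not a recommended set.

```
# Versioned file names: this line must be edited on every upgrade.
archive=frmall.jar,Directprint.jar,pdfbox-1.8.11.jar,fontbox-1.8.11.jar,commons-logging-1.2.jar

# Version-less file names: the line never changes; only the files on disk are swapped.
# archive=frmall.jar,Directprint.jar,pdfbox.jar,fontbox.jar,commons-logging.jar
```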

    Both have their advantages and disadvantages, you will have to decide for yourself what best suits your needs.

    If you rename the file, do you have to redo the manifest properties and re-sign everything beforehand? A hassle?

    Yes. The manifest shouldn't care about the file name, but the signing process does.

    Cheers

  • Is having RAID (1+0) as well as failgroups for all ASM disks a well-known best practice?

    Dear Experts,

    Is having RAID (1+0) as well as failgroups for all ASM disks a well-known best practice?

    Is having both the best practice?

    • RAID (1+0)
    • ASM failgroups for all ASM disks

    Thank you

    IVW

    You can create ASM disk groups with normal or high redundancy, or specify external redundancy. It depends on your reliability requirements and storage performance. Remember that ASM is not RAID; its redundancy is at the file level, and ASM stripes across devices. Oracle generally recommends having redundancy in the storage. If you already have RAID redundancy at the hardware controller or storage level, Oracle recommends configuring ASM disk groups with external redundancy. ASM redundancy is only maintained between disks in different failure groups, and by default each device is its own failure group. Of course, you always want to make sure that you do not rely on redundancy between logical devices or partitions residing on a single physical unit. Compared to external RAID redundancy, using ASM redundancy gives the DBA more control over, and transparency into, the underlying setup.
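    As a hedged illustration of the two options described above; the disk group names, failgroup names and device paths are hypothetical.

```sql
-- NORMAL redundancy: ASM mirrors file extents across the two failure groups,
-- so each failgroup should sit on separate physical storage (e.g. one per controller).
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg_ctrl1 DISK '/dev/oracleasm/disks/DISK1', '/dev/oracleasm/disks/DISK2'
  FAILGROUP fg_ctrl2 DISK '/dev/oracleasm/disks/DISK3', '/dev/oracleasm/disks/DISK4';

-- EXTERNAL redundancy: protection is delegated to the hardware RAID (e.g. RAID 1+0).
-- CREATE DISKGROUP data2 EXTERNAL REDUNDANCY
--   DISK '/dev/oracleasm/disks/DISK5';
```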

  • New to ESXi - install ESXi on USB or local disks? Best practices?

    I'm new to VMware and run a small shop. What is the best practice or best method for installing the ESXi OS? I currently have a few hosts where I have installed it on a USB stick on the server board. From some research, it seems it would be better to have two small SSD drives that I can RAID for the operating system, then another RAID set for the VM datastore. USB is a single point of failure.

    Thank you

    Mike

    Hello

    Having the internal hard drives in RAID1 for the OS will certainly avoid a single point of failure, as you correctly pointed out. At present, if your USB key dies, your host is in trouble quite quickly, and you will need to get a new one and re-install. You could save your host configuration though, and realistically it doesn't take too long to rebuild a host if it dies. Losing other stuff like network configurations would however be a pain!

    I think two internal SSD drives in RAID1 for the OS is probably overkill. You would get some startup speed advantage, but realistically most servers don't restart all that often, and once ESXi is up and operational there is very little activity on its disks, a config update every so often and so on. I'd be inclined to use an SSD to create a host cache for swap instead; that way you actually use the SSD and get more performance for your money.

    Many manufacturers (like Dell) use internal SD cards in RAID1. While SD cards are not known to be very robust, because of the small footprint of ESXi and the minimal number of writes needed once it is initially installed, they make a less expensive alternative to business-class disks for the OS.

    In regards to your datastores, having an internal RAID set on your local disks is best if you use a stand-alone host with no network-attached storage. You still have the problem of host failure, though.

    Cheers,

    Ryan

  • What is the best practice for block sizes across several layers: hardware, hypervisor, and VM OS?

    The example below is not a real setup I work with, but it should get the message across. Here's my example of what I'm using as the layer reference:

    (Layer 1) Hardware: the hardware RAID controller

    • 1 TB volume configured with a 4K block size. (RAW?)


    (Layer 2) Hypervisor: ESXi datastore

    • 1 TB from the RAID controller, formatted with VMFS5 @ 1 MB block size.


    (Layer 3) VM OS: Server 2008 R2 w/SQL

    • 100 GB virtual HD using NTFS @ 4K block size for the OS.
    • 900 GB virtual HD using NTFS @ 64K block size to store the SQL database.

    It seems that VMFS5 is limited to a 1 MB block size. Would it be preferable for all or some of the block sizes to match across the different layers, and why or why not? What effect do the differing block sizes on the layers have on performance? Could you suggest a better alternative, or best practices, for the sample configuration above?

    If a SAN were involved instead of a hardware RAID controller in the host computer, would it be better to store the OS VMDK on the VMFS5 datastore and create a separate iSCSI LUN formatted with a 64K block size, then attach it with the iSCSI initiator in the operating system and format it at 64K? Does matching block sizes across layers increase performance, and is it advisable? Any help answering and/or explaining best practices is greatly appreciated.

    itsolution,

    Thanks for the helpful response points. I wrote a blog post about this which I hope will help:

    Partition alignment and VMware 5 block sizes | blog.jgriffiths.org

    To answer your questions, here goes:

    I have (around) 1 TB of space and create two virtual drives.

    Virtual Drive 1 - 10 GB - used for the hypervisor OS files

    Virtual Drive 2 - 990 GB - used for the VMFS datastore / VM data storage

    The default stripe element size on the PERC 6/i is 64 KB, but it can be 8, 16, 32, 64, 128, 256, 512 or 1024 KB.

    What block size would you use for array 1, which is where the actual hypervisor will be installed?

    -> If you have two arrays, I would set the block size on the hypervisor array to 8 KB

    What block size would you use for array 2, which will be used as the VM datastore in ESXi?

    -> I'd go with 1024 KB, matching the VMFS 5 block size

    - Do you want 1024 KB to match the VMFS block size which will finally be formatted on top of it?

    -> Yes

    * Consider that this datastore would eventually contain several virtual hard drives for each OS, SQL database and SQL log volume, formatted with NTFS at the recommended block sizes: 4K, 8K, 64K.

    -> The problem here is that VMFS will go with 1 MB no matter what you do, so carving things up lower down in the RAID causes no problems but doesn't help either. You have 4K sectors on the disk, a 1 MB RAID stripe, 1 MB VMFS, and 4K, 8K, 64K in the guest. Really, the 64K gains are lost a little when the back-end storage is 1 MB.

    If the RAID stripe element size is set to 1024 KB so that it matches the VMFS 1 MB block size, would that be better practice, or is it indifferent?

    -> Whether it's 1024 KB or 4 KB chunks, it doesn't really matter.

    What effect does this have on the OS/virtual HDs and their respective block sizes, installed on top of the stripe and the VMFS block size?

    -> The effect on performance is minimal, but it exists. It would be a lie to say that it didn't.

    I could be completely off on the overall picture here, but to me it seems that there must be some kind of correlation between the three different "layers", as I call them, and a best practice in use.

    Hope that helps. I'll tell you, I have run virtualized SQL and Exchange for a long time without any problems and without changing the operating system block size; I just stuck with the Microsoft defaults. I'd be much more concerned about the performance of the RAID controller in your server. They keep making these things cheaper and cheaper, with less and less cache. If performance is the primary concern, then I would consider a different array or a RAID 5/6 solution, or at least look at the amount of cache on your RAID controller (read cache is normally essential for a database).

    Just my two cents.

    Let me know if you have any additional questions.

    Thank you

    J
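    For reference, the guest-side NTFS allocation unit size discussed in this thread can be inspected and set with standard Windows commands. A hedged sketch; the drive letter E: is hypothetical.

```bat
:: Show the current NTFS allocation unit size ("Bytes Per Cluster") of a volume.
fsutil fsinfo ntfsinfo E:

:: Format a data volume with a 64 KB allocation unit, a common choice for
:: SQL Server data files (this erases the volume).
format E: /FS:NTFS /A:64K /Q
```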

  • vSphere 5 ESXi host - RAID config best practice?

    Hi all

    I'm pretty new to VMware and wonder what's the best setup for RAID.

    I have an HP DL380 G7 with 4 x 600 GB SAS disks.

    It is a simple lab/DR server, and I intend to create a RAID-5 volume with no hotspare. Can I partition the RAID and put the ESXi software on a small partition and use the other partition for the datastore? If possible, is it best practice?

    I imagine there will be a performance hit from having the same RAID volume doing the reads/writes for the ESXi software and the storage at the same time. Would I be better off putting a couple of 72 GB disks in RAID 1 into the server, just for the ESXi software and maybe to store some ISOs?

    Thank you!

    B

    Don't worry about ESXi. It runs entirely in memory and only stores configuration changes and logs on the hard drive. I'd go with the RAID5 configuration that you mentioned.

    André

  • AWM 11g best practices

    I am building a cube in AWM 11g. It has 25 dimensions, and some of the dimension tables have over 30 million records. I partitioned the cube on the time dimension, with the lowest level being months. One dimension took about 20 hours to build. I am now loading the cube; it has been running for more than 30 hours and is still building. I don't know what needs to be done to improve performance, and I'd welcome best practices in general. Thanks in advance. What is the recommended number of dimensions to use in a cube, and are there recommendations on how attributes or hierarchies affect a dimension, or on how many measures to include in a cube? These cubes use OBIEE as the reporting tool.

    Thank you in advance.

    Hello

    25 dimensions is really a lot if each cube is dimensioned by all of them. Oracle OLAP does not impose a limit on the number of dimensions of a cube, but a typical, efficient cube has 5 to 8 dimensions. If you analyze your business needs, you might be able to create much smaller cubes; keep each cube to 5 to 8 dimensions so that the loading process is much faster and does not choke.

    How many members does the dimension that takes 20 hours have? Do you load from a view or a table? Also, if you do a complete refresh of the dimension with synchronization while the cube is also loaded, it takes a while, but 20 hours still indicates that something is wrong here.

    Cube load performance depends on several factors:

    1. the percentage to precompute.

    2. the cube's dimensions: how many members are in each dimension.

    3. the depth of the precomputation.

    4. how many measures you have in each cube.

    5. whether the cube is partitioned. If yes, at what level? You may need to experiment to arrive at a correct level for the partition, and at how many partitions.

    6. if your cubes are not partitioned, then you get a serial load, which is eating all your build time.

    The more you precompute, the longer the cube load takes and the more space the AW takes on disk. A well-performing cube has a good precompute combination, so that the load fits within your load window and queries do not suffer either.

    You can get help here

    http://oracleolap.blogspot.de/

    Oracle OLAP: Best practices

    Thank you

    Brijesh

  • NetApp best practices and independent disks

    Hi. The NetApp best practices for VMware recommend that transient and temporary data, such as the guest OS pagefile, temporary files and swap files, be moved to another virtual disk on a different datastore, as snapshots of this type of data can consume a large amount of storage in a very short time due to the high rate of change (that is, create a datastore dedicated to transient and temporary data for all VMs, with no other types of data or VMDKs residing on it).

    NetApp also recommends configuring the VMDKs residing in these datastores as "Independent persistent" disks in vCenter. Once configured, the transient and temporary data VMDKs will be excluded from VMware vCenter snapshots and from NetApp snapshot copies initiated by SnapManager for Virtual Infrastructure.
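    For illustration, marking a disk independent persistent corresponds to .vmx entries like the following hedged sketch; the device index and file name are hypothetical.

```
scsi0:1.present = "TRUE"
scsi0:1.fileName = "pagefile_disk.vmdk"
# "independent-persistent" disks are skipped by VM snapshots; writes go
# straight to disk and survive power-off.
scsi0:1.mode = "independent-persistent"
```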

    I would like to understand the impact of this best practice. Can anyone advise on the following:

    • If the above is implemented:

      • Will snapshots work via vCenter?

      • Will snapshots work via the NetApp SnapManager tool?

      • Does the snapshot include all of the VM's disks? If not, what is the consequence of not having the whole picture of the VM?

      • Can the vCenter snapshot be restored OK?

      • Can the NetApp snapshot be restored OK?

    • What impact does the foregoing have on the restore process if using a backup product that relies on snapshot technology?

    Thank you





    Hi Joe

    These recommendations are purely to save storage space during replication or backup.

    For example, you can move your *.vswp file (the VM swap file) to a different datastore. NetBackup can do snapshots of the datastores, and with this configuration you can exclude this particular datastore.

    The same is true if you create a datastore dedicated to OS swap files; mark those VMDKs independent so that vCenter does not snapshot them.

    I did a project with NetApp on production SAP boxes.

    We moved all the *.vswp files to dedicated, newly created datastores, and used RDMs for the OS swap locations.

    We actually used SnapDrive, a NetApp technology, to quiesce the SQL DB on the RDM before the RDM snapshot was taken, but I won't go into too much detail.

    To answer your questions (see the comments in the quote):

    joeflint wrote:

    • If the above is implemented:
      • Will snapshots work via vCenter? - Yes, they will; the independent disk simply gets ignored
      • Will snapshots work via the NetApp SnapManager tool? - Yes, they will; it snaps the entire datastore/LUN
      • Does the snapshot include all of the VM's disks? If not, what is the consequence of not having the whole picture of the VM? - No. The *.vswp file is created when the VM is started (no need to back it up)

    - The OS swap-location VMDK must be re-created in the case of a restore. Windows will

    still boot if the swap disk is missing, and you can then specify the new swap location.

    • What impact does the foregoing have on the restore process if using a backup product that relies on snapshot technology? - These backup products use vCenter snapshots, and because vCenter snapshots work 100%, it shouldn't be a problem.

    Hope it may be useful.

    Please award points if it is.

  • AWM cubes: best practices

    All,

    I'm working on developing an AWM cube and need help with some best practices for the design of these cubes.

    Thank you

    40 dimensions sounds like too many; not that OLAP couldn't handle it.

    Often two things are mistaken for a dimension:
    (1) an attribute of a dimension
    (2) a hierarchy of a dimension

    Make sure that you understand when to add an attribute to a dimension and when to add a hierarchy to a dimension, instead of modeling these two things as separate dimensions.
    A dimension can have several attributes and hierarchies.

    Start with your reporting requirements and then determine how many stored dimensions, attributes, hierarchies, cubes and measures are needed to support those reports. Do NOT store what you can calculate using calculated measures. All OLAP engines are very efficient calculation engines.

    Another point to keep in mind: you can (and should) create several stored cubes, each with lower dimensionality. Then you can create a reporting cube that is dimensioned by all dimensions and "join" the data of all the stored cubes in that reporting cube using calculated measures. So the reporting cube has NO stored measures, ONLY calculated measures.

    Partition your stored cubes on the time dimension. Start with MONTH-level partitioning; you can always change it later. NOTE that this has nothing to do with the relational database Partitioning option. OLAP cube partitioning takes a logical cube and creates several cubes "behind the scenes" (you will not see them in the AWM GUI), so that several CPUs can be used to load the cube's data at the same time.

    Initially, create all your stored cubes as compressed cubes and set all dimensions to "Sparse".

    While OLAP and OBIEE can manage parent-child hierarchies, I have found that OBIEE works best with level-based ones. So, if there are any parent-child hierarchies, convert them to "Ragged level-based" (NOT balanced level-based). Also avoid skip-level hierarchies, as OBIEE generates too much behind-the-scenes work when you query on skip-level hierarchies.

  • Best practices ESXi Install/Config

    Hi all. First of all, I am a VMware noob, my apologies. I have been testing ESXi 4 internally in recent months. Without going into details, I would like to adopt ESXi 4 as the DR solution. We are a small company with a large parent company that can provide the space for the DR equipment.

    I would buy an adequate server to deploy three virtual servers, which would be W2k3 or W2k8. One would be a file/print server, another an Exchange 2007 server, and the last would be a terminal server. I'm looking for a guide or a best-practice document that talks about memory allocation recommendations, hardware practices, partitioning practices, etc. For example, should I go with a RAID 5 configuration vs RAID 1 or RAID 1+0? During the installation of W2k3, should I create C:\ and D:\ partitions... etc.

    If any of you have recommendations or guidelines that you can share with me, it would be greatly appreciated.  Thanks in advance to all!

    If it is strictly intended for DR, I suppose that high performance is not what you need in the first place.

    Then I'd go for a server with one quad-core CPU at 2.5 GHz or more, at least 8 GB RAM, and 2 x 300 GB SAS HDDs in RAID1. Make sure you have a RAID controller with battery-backed write cache (otherwise you won't be very happy), and make sure that the server and its components are on the VMware HCL.

    The guest servers will probably run fine with 2 GB of RAM each, but don't leave ESXi itself with too little RAM.

    Configure the virtual machine as you would with physical HW.

    For Windows 2003, take care of partition alignment.

    André

    EDIT: Please use only 1 vCPU per VM.

  • /var/log is full. Best practices?

    One of the log partitions on our host is 100% full. I'm not the administrator for this host, but I manage/deploy the virtual machines on it for others to use.

    I was wondering what's the best practice for dealing with a full log partition? I found an article that mentioned editing the file /etc/logrotate.d/vmkernel so that files are compressed more often and kept for a shorter time, but there were no really clear instructions on what to change and how.

    Is the only way to investigate on the console itself, or in the /var/log directory via PuTTY? Is there no way to see it in the VIC?

    Thank you

    Hello

    To solve the immediate problem, I would transfer any log in /var/log with a number at the end, i.e. .1, .2, etc., to a temporary storage location outside the ESX host. You could run something similar to the following scp command to do this:

    scp /var/log/*.[0-9]* /var/log/*/*.[0-9]* host:TemporaryDir
    

    Or you can use WinSCP to transfer from the ESX host to a Windows box. Once you have retrieved the existing log files from the system for later review, use the following to clear the space:

    cd /var/log; rm *.[0-9]* */*.[0-9]*
    

    I would then configure log rotation as directed by the hardening guidance for VMware ESX.
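    As a hedged sketch, a tuned /etc/logrotate.d/vmkernel might look like the following; the directive values are illustrative choices, not the shipped defaults.

```
/var/log/vmkernel {
    daily            # rotate every day instead of weekly
    rotate 4         # keep only four old logs
    compress         # gzip rotated logs right away
    missingok
    notifempty
    sharedscripts
    postrotate
        /usr/bin/killall -HUP syslogd 2> /dev/null || true
    endscript
}
```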

    Best regards, Edward L. Haletky, VMware communities user moderator, VMware vExpert 2009
    Now available on Rough Cuts: "VMware vSphere(TM) and Virtual Infrastructure Security: Securing the Virtual Environment" - http://www.astroarch.com/wiki/index.php/VMware_Virtual_Infrastructure_Security
    Also available: "VMWare ESX Server in the Enterprise" - http://www.astroarch.com/wiki/index.php/VMWare_ESX_Server_in_the_Enterprise
    SearchVMware Pro (http://www.astroarch.com/wiki/index.php/Blog_Roll) | Blue Gears (http://www.astroarch.com/blog) | Top Virtualization Security Links (http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links) | Virtualization Security Round Table Podcast (http://www.astroarch.com/wiki/index.php/Virtualization_Security_Round_Table_Podcast)
