Create a custom attribute to model the growth rate of the data store used space

I would like to see which datastores are growing fastest in the environment, in terms of GB per day or something similar.  Is it possible to use a custom attribute to apply a formula that measures a datastore's rate of growth over time, so that I can see which datastores grow fastest?

What you describe would be a simple moving average super metric. vC Ops does not support simple moving average formulas for super metrics at this time.

It sounds like you are after a "growth rate", but you won't get that with the built-in attributes. You could approach it differently and look at the capacity remaining and time remaining metrics. You could also create additional widgets that graph utilization % over the last 30-90 days to give a better picture of usage in GB, number of VMs, etc.

I also suggest building a Top-N widget that shows the lowest values of the time remaining and capacity remaining metrics. This will surface the datastores that are most at risk.
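
If you collect the datastore used-space samples yourself (for example exported to a CSV), the growth rate in GB/day and a simple moving average of it are easy to compute outside vC Ops. A minimal sketch, assuming daily used-GB samples; the sample values below are made up for illustration:

    from statistics import mean

    # Daily "used space" samples for one datastore, in GB (illustrative values only).
    used_gb = [812.0, 815.5, 821.0, 828.2, 830.1, 841.7, 855.0]

    # Day-over-day growth in GB/day.
    growth_per_day = [b - a for a, b in zip(used_gb, used_gb[1:])]

    def simple_moving_average(values, window):
        """Simple moving average over a fixed window (the formula vC Ops super metrics lack)."""
        return [mean(values[i - window + 1:i + 1]) for i in range(window - 1, len(values))]

    print("daily growth (GB/day):", growth_per_day)
    print("3-day SMA of growth:  ", simple_moving_average(growth_per_day, 3))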

Tags: VMware

Similar Questions

  • Local storage showing up as a datastore

    I have 4 ESXi 5.5 cluster nodes (host1, 2 ... 5) plus 3PAR storage. I noticed that the local drive of host2 is detected as a datastore.

    When I create a VM and choose the datastore, I can see all the datastores from my 3PAR LUNs, but it seems this local drive of host2 shows up too!

    I would like to remove it! Can you help me, please?

    To hide this datastore using permissions, follow this procedure: open the vSphere Client -> Inventory -> Datastores -> select the VMFS datastore -> Permissions -> click the user or group in the list and set them to "No access", not propagated.

    Once that is done it will stop showing up everywhere.
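
    The same permission change can also be scripted. Below is only a rough sketch with pyVmomi; the vCenter host name, datastore name, and principal are placeholders, and it assumes datacenters sit directly under the root folder. The built-in "No access" role is called NoAccess in the vSphere API.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details - replace with your own.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Locate the unwanted local datastore by name (placeholder name).
    # Assumes datacenters directly under the root folder.
    ds = next(d for dc in content.rootFolder.childEntity
              for d in dc.datastore if d.name == "host2-local")

    authz = content.authorizationManager
    no_access_role = next(r.roleId for r in authz.roleList if r.name == "NoAccess")

    # Assign "No access" to the user/group on just this datastore, without propagation.
    perm = vim.AuthorizationManager.Permission(
        principal="DOMAIN\\someuser", group=False,
        roleId=no_access_role, propagate=False)
    authz.SetEntityPermissions(entity=ds, permission=[perm])

    Disconnect(si)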

  • VMX file larger than the maximum size supported by the data store

    Hi all

    I seem to be getting a strange error when trying to snapshot a virtual machine that lives on a VMFS5-formatted datastore.

    The virtual machine (VMDK and VMX files) sits on a thin-provisioned 2 TB datastore formatted as VMFS5.  It is a new datastore with no other VMs on it.  I had problems in the past with datastore block sizes (1 MB, 2 MB, 4 MB, 8 MB on VMFS3); however, I thought VMFS5 got rid of that issue, since it uses 1 MB blocks throughout, and all of the virtual machine's files are on the same datastore.

    The error I get indicates that the VMX file is too large for the datastore (it actually reports an "unspecified file"); however, if I make the 2 TB disk independent, the snapshot goes through fine.  The virtual machine has two virtual disks, a 50 GB VMDK and a thin 2 TB VMDK (200 GB used).  I tried Storage vMotioning the VMDK and VMX to other VMFS5-formatted datastores and still get the same error.

    I'd appreciate any help and ideas.

    Thank you

    Travis

    Although the block size limit is gone, the maximum file size of 2 TB minus 512 bytes still exists. Since a snapshot (delta) disk can grow to the provisioned size of the base disk plus overhead for the metadata, the maximum virtual disk size - to still be able to create snapshots - is 2032 GB.

    See "calculation time system required by the snapshot files" at http://kb.vmware.com/kb/1012384

    André

  • Custom OAM attribute in the User Manager workflow

    I need to add a custom attribute (defined on the person objectclass) to the User Manager workflow. I am a Master administrator and I can see the objectclass and the attribute under the configuration -> objectclass tab.
    But when I try to set up 'Attribute Access Control', I don't see the custom attribute in the list.
    Can you please tell me how to make the custom attribute available in the workflow?

    Thank you

    If you use OID, check if the attribute is indexed.

  • Call a stored procedure for each row into a transient attribute and display the data in an af:table, the other attributes being entity-based

    Hi Experts,

    JDeveloper 12.1.3.0.0

    I have a VO based on an entity object. One column of the VO is a transient attribute (which I created).

    I need to call a stored procedure for each row, put the result into the transient attribute, and display the data in an af:table along with the other attributes.

    So can anyone suggest how can I achieve this?

    Thank you

    AR

    I think you need a stored function (one that returns a value) in this case, don't you?

    Take a look at:

    https://docs.Oracle.com/CD/B31017_01/Web.1013/b25947/bcadvgen005.htm

    and search for:

    Invoking a stored function with only IN arguments

    Call your function in the attribute's getter and return the value...

  • 0 free pointer blocks - cannot create new files on the datastore

    We have been experiencing problems trying to power on virtual machines. When attempting to power them on, we see the error "Failed to extend swap file from 0 KB to 2097152 KB".

    We checked whether the .vswp file gets created in the virtual machine's folder on the datastore. Connecting to the ESXi host, we saw the following error messages in vmkernel.log:

    2016-01-16T21:19:40.556Z cpu1:4971732)WARNING: Res3: 6984: 'freenas-6-ds': [rt 3] No space - did not find enough resources after the second pass! (required: 1, found: 0)
    2016-01-16T21:19:40.556Z cpu1:4971732)Res3: 6985: 'freenas-6-ds': [rt 3] resources t 0, e 0, pn 16, bm 0, b 0, rcs u 0, i 0, nf 4031, pe 0, oe 0
    2016-01-16T21:19:40.556Z cpu1:4971732)WARNING: SwapExtend: 683: Failed to extend swap file from 0 KB to 2097152 KB.

    This was surprising given that we have about 14 TB of space available on the data store:

    [root@clueless:~] df -h
    Filesystem   Size   Used Available Use% Mounted on
    VMFS-5      20.0T   5.4T     14.6T  27% /vmfs/volumes/freenas-six-ds

    However, when we used "dd" to write a 20 GB file, we would get "no space left on device":

    [root@clueless:/vmfs/volumes/55a00d31-3dc0f02c-9803-025056000040/deleteme] dd if=/dev/urandom of=deleteme bs=1024 count=2024000
    dd: writing 'deleteme': No space left on device
    263734+0 records in
    263733+0 records out
    [root@clueless:/vmfs/volumes/55a00d31-3dc0f02c-9803-025056000040/deleteme] ls -lh deleteme
    -rw-r--r--    1 root     root      255.1M Jan 19 01:02 deleteme

    We checked that we have free inodes:

    Ramdisk Name  System  Include in Coredumps  Reserved   Maximum     Used      Peak Used  Free  Reserved Free  Maximum Inodes  Allocated Inodes  Used Inodes  Mount Point
    ------------  ------  --------------------  ---------  ----------  --------  ---------  ----  -------------  --------------  ----------------  -----------  ---------------------------
    root          true    true                  32768 KiB  32768 KiB   176 KiB   176 KiB    99%   99%            9472            4096              3575         /
    etc           true    true                  28672 KiB  28672 KiB   284 KiB   320 KiB    99%   99%            4096            1024              516          /etc
    opt           true    true                  0 KiB      32768 KiB   0 KiB     0 KiB      100%  0%             8192            1024              8            /opt
    var           true    true                  5120 KiB   49152 KiB   484 KiB   516 KiB    99%   90%            8192            384               379          /var
    tmp           false   false                 2048 KiB   262144 KiB  20 KiB    360 KiB    99%   99%            8192            256               8            /tmp
    hostdstats    false   false                 0 KiB      310272 KiB  3076 KiB  3076 KiB   99%   0%             8192            32                5            /var/lib/vmware/hostd/stats


    We believe the cause is that we have 0 free pointer (PTR) blocks:

    [root@clueless:/vmfs/volumes/55a00d31-3dc0f02c-9803-025056000040] vmkfstools -P -v 10 /vmfs/volumes/freenas-six-ds
    VMFS-5.61 file system spanning 1 partitions.
    File system label (if any): freenas-six-ds
    Mode: public ATS-only
    Capacity 21989964120064 (20971264 file blocks * 1048576), 16008529051648 (15266923 blocks) avail, max supported file size 69201586814976
    Volume Creation Time: Fri Jul 10 18:21:37 2015
    Files (max/free): 130000/119680
    Ptr Blocks (max/free): 64512/0
    Sub Blocks (max/free): 32000/28323
    Secondary Ptr Blocks (max/free): 256/256
    File Blocks (overcommit/used/overcommit %): 0/5704341/0
    Ptr Blocks (overcommit/used/overcommit %): 64512/0/0
    Sub Blocks (overcommit/used/overcommit %): 3677/0/0
    Volume Metadata size: 911048704
    UUID: 55a00d31-3dc0f02c-9803-025056000040
    Logical device: 55a00d30-985bb532-BOI.30-025056000040
    Partitions spanned (on "lvm"):
        naa.6589cfc0000006f3a584e7c8e67a8ddd:1
    Is Native Snapshot Capable: YES
    OBJLIB-LIB: ObjLib cleanup done.
    WORKER: asyncOps=0 maxActiveOps=0 maxPending=0 maxCompleted=0

    When we power off a virtual machine, it releases 1 pointer block and we are then able to power on another VM or create the 20 GB file using "dd". Once we reach 0 free pointer blocks, we are unable to create new files.

    Can anyone give any suggestions on how we may be able to free up pointer blocks? We have already tried restarting all management services on all connected ESXi hosts.

    FreeNAS is not running on a virtual machine.

    We solved the problem: we found that a lot of pointer blocks were being used by many of our virtual machine templates. Removing the template disks solved the problem.
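
    If you want to watch for this condition before it bites, the pointer-block counters can be pulled straight out of the vmkfstools output shown above. Below is a minimal sketch in Python; the regular expression assumes the "Ptr Blocks (max/free)" label format from the listing above, and it has to run where vmkfstools is available (an ESXi shell with a reasonably recent Python 3):

    import re
    import subprocess

    def ptr_blocks_free(volume_path):
        """Return (max, free) pointer blocks for a VMFS volume, parsed from vmkfstools output."""
        out = subprocess.run(
            ["vmkfstools", "-P", "-v", "10", volume_path],
            capture_output=True, text=True, check=True).stdout
        # Matches a line such as: "Ptr Blocks (max/free): 64512/0"
        m = re.search(r"Ptr Blocks \(max/free\):\s*(\d+)/(\d+)", out)
        if not m:
            raise RuntimeError("could not find 'Ptr Blocks (max/free)' in vmkfstools output")
        return int(m.group(1)), int(m.group(2))

    maximum, free = ptr_blocks_free("/vmfs/volumes/freenas-six-ds")
    print("pointer blocks free: %d of %d" % (free, maximum))
    if free == 0:
        print("WARNING: no free pointer blocks - new files cannot be created on this datastore")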

  • How to associate the target datastore for a newly created interface using the API

    When creating a new interface under a project, I need to associate the target datastore for the interface's mappings by using the API.

    I am able to get the temporary datastore associated with the newly created interface. I need to associate a datastore from a model instead. How do I do this using the API?

    My code is:

    String pCode = "DEVELOPMENT";
    OdiContext context = ((IOdiContextFinder) mgr.getFinder(OdiContext.class)).findByCode(pCode);
    System.out.println(context.getLastDate());

    OdiInterface pInterface = new OdiInterface("toDeleted_Interface", pFolder, context);
    pFolder.addInterface(pInterface);

    List<DataSet> ds = pInterface.getDataSets();
    Iterator<DataSet> itr = ds.iterator();
    DataSet ds_nxt = itr.next();

    String pAlias = "HRA_TMPL_DEFNS_TL";
    int pOrder = 0;
    OdiModel pModel = ((IOdiModelFinder) mgr.getFinder(OdiModel.class)).findByCode("FILE_PM_MODEL");
    String pName = "HRA_TMPL_DEFNS_TL";
    OdiDataStore pUnderlyingOdiDataStore = new OdiDataStore(pModel, pName);
    SourceDataStore pSourceDataStore = new SourceDataStore(ds_nxt, false, pAlias, pOrder, pUnderlyingOdiDataStore);
    ds_nxt.addSourceDataStore(pSourceDataStore);

    TargetDataStore tdata = pInterface.getTargetDataStore();

    if (tdata.isTemporaryDataStore())
    {
    }

    http://odiexperts.com/creating-interface-for-single-source-and-target/
    http://odiexperts.com/creating-temporary-interface-using-ODI-SDK/

  • Cannot create the data store

    Hello!

    I installed ESX 3.5 Update 4 successfully, but I ran into the following problem: I cannot create a datastore!

    The VI Client can't see my hard drives!

    After connecting directly to the ESX server using ssh, I see:

    # esxcfg-vmhbadevs

    #            # empty!!!

    # df -h
    Filesystem    Size  Used Avail Use% Mounted on
    /dev/hde2     3.5G  1.4G  2.0G  41% /
    /dev/hde6      84M   19M   61M  24% /boot
    none          124M     0  124M   0% /dev/shm
    /dev/hde7     525M   20M  479M   4% /var/log

    # vdf -h
    Filesystem    Size  Used Avail Use% Mounted on
    /dev/hde2     3.5G  1.4G  2.0G  41% /
    /dev/hde6      84M   19M   61M  24% /boot
    none          124M     0  124M   0% /dev/shm
    /dev/hde7     525M   20M  479M   4% /var/log
    /vmfs/devices  33M     0   33M   0% /vmfs/devices

    # fdisk -l

    Disk /dev/hdk: 200.0 GB, 200049647616 bytes
    255 heads, 63 sectors/track, 24321 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot    Start       End      Blocks   Id  System
    /dev/hdk1              1     16709   134215011   fb  Unknown
    /dev/hdk2          16710     24321    61143328+  fb  Unknown

    Disk /dev/hdi: 1500.3 GB, 1500301910016 bytes
    255 heads, 63 sectors/track, 182401 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot    Start       End      Blocks   Id  System
    /dev/hdi1   *          1    182401  1465135968+  fb  Unknown

    Disk /dev/hdg: 1500.3 GB, 1500301910016 bytes
    255 heads, 63 sectors/track, 182401 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot    Start       End      Blocks   Id  System
    /dev/hdg1   *          1    182401  1465135968+  fb  Unknown

    Disk /dev/hde: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot    Start       End      Blocks   Id  System
    /dev/hde1              1        96      763904    5  Extended
    Partition 1 does not end on cylinder boundary.
    /dev/hde2             97       548     3630690   83  Linux
    /dev/hde3            618     60802   483425304   fb  Unknown
    /dev/hde4            549       617      554242+  82  Linux swap
    /dev/hde5             13        27      112624   fc  Unknown
    /dev/hde6   *          2        12       88326   83  Linux
    /dev/hde7             28        95      546178+  83  Linux

    Partition table entries are not in disk order

    Disk /dev/hdc: 1500.3 GB, 1500301910016 bytes
    255 heads, 63 sectors/track, 182401 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot    Start       End      Blocks   Id  System
    /dev/hdc1          43932    182401  1112260275    7  HPFS/NTFS
    /dev/hdc2   *          1     11252    90381658+   7  HPFS/NTFS
    /dev/hdc3          11253     43931   262494061+  fb  Unknown

    Partition table entries are not in disk order

    #

    The partitions are present, but VMware won't pick them up on a rescan.

    Is there a way to understand what happened?

    My setup

    motherboard: Tyan Thunder n3600m S2932GRN

    disks: SATA seagate and WD.

    Nick

    I looked at the barebone's specification and it seems that your controller is a "soft" RAID controller.

    Take a look at the configuration of the SATA controller.

    Once your drives are recognized as /dev/sdXX, you will be able to create your datastore.

    André

    * If you found this or any other answer useful please consider awarding points for correct or helpful answers

  • How to create a new virtual machine from an existing VM in the datastore on ESXi

    I have ESXi and couple of virtual machines.

    I have the vsphere client on Windows 7.

    I then deployed a VM's OVF from the vSphere Client and it began to import my 30 GB VM.

    In the morning, I saw that my VM's VMDK file had been transferred to the datastore,

    but I couldn't see any virtual machine in the inventory.

    Now I don't know how to power on from this VMDK file that is sitting in my datastore.

    @King_Robert

    It's a very bad habit of yours to just copy and paste content created by someone else, without even mentioning the source (http://www.mustbegeek.com/create-copy-of-existing-virtual-machine-in-esxi-server/)

    André

  • Block size is too small - reformat the datastore or create 2 VMDK files?

    Hello

    I'm new to VMware and so far I absolutely love everything about it!  Well, I was not too happy when I realized that I can't create a disk larger than 256 GB without reformatting the VMFS datastore.   Apparently, I accepted the default block size of 1 MB during the installation...   I now need to configure a file server with about 500 GB of storage.  Is reformatting the datastore with a larger block size (4 MB would be fine for me) easy enough?  I am currently working on my first ESXi 4.1 host and will set up 2 more hosts in the coming weeks.  I read somewhere on the forum that ESXi 4.0 doesn't let you change the default block size of 1 MB - is this true, or relevant to 4.1?

    Currently, I have only 2 small VMs on that host, and these can easily be backed up and taken offline for a few hours if necessary.  When setting up the server, I created a single RAID array that holds ESXi and all virtual machines - does that mean I have to reinstall ESXi in order to increase the block size?

    An alternative I can think of is to simply create 2 VMDKs of 255 GB each and spread the storage across them in the guest operating system (Win 2008).   Performance-wise, is one larger disk better (or worse) than two smaller disks?

    On a related note, what should I choose for the "Independent" option when adding a new virtual disk?  The default is disabled (not independent).

    Your thoughts, focus and expertise are welcome and will be greatly appreciated!

    Thanks in advance,

    Dothan

    Dothan,

    You will have to (after backing up the virtual machines) remove the current datastore and create a new one with the new block size. I have not done this on ESXi 4.1 yet; however, it worked on ESXi 4.0.

    Regarding the maximum VMDK size: if you want to be able to take snapshots, make sure you subtract twice the block size (in GB) from the documented maximum size.

    Block size 1 MB --> 254 GB (= 256 GB - 2 GB)

    Block size 2 MB --> 508 GB (= 512 GB - 4 GB)

    Block size 4 MB --> 1016 GB (= 1024 GB - 8 GB)

    Block size 8 MB --> 2032 GB (= 2048 GB - 16 GB)

    André

    EDIT: Here is the KB for the maximum sizes: http://KB.VMware.com/kb/1012384
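
    The pattern behind that table is easy to check; a quick sketch below, where the block sizes and documented maxima are taken straight from the table above (nothing else is assumed):

    # Maximum VMDK size (GB) that still allows snapshots, per VMFS block size.
    # Rule from the answer above: subtract twice the block size (in GB) from the documented maximum.
    documented_max_gb = {1: 256, 2: 512, 4: 1024, 8: 2048}  # block size (MB) -> max file size (GB)

    for block_mb, max_gb in documented_max_gb.items():
        snapshot_safe_gb = max_gb - 2 * block_mb
        print("block size %d MB --> %d GB (= %d GB - %d GB)" % (block_mb, snapshot_safe_gb, max_gb, 2 * block_mb))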

  • How to move an alarm rule definition from the datastore level to the ESX level without re-creating the rule?

    Hi, I would like to move my alarm rule definition from the datastore level to the ESX level, without re-creating the rule. Is this possible?

    In fact, I don't want any alarm definition at the datastore level at all. So what is the best practice?



    Best regards

    It is currently not possible.  The rules are hierarchical, and there is no way to stop the inheritance or to move definitions.  You will have to re-create them at the appropriate level.

    -KjB

    VMware vExpert

  • How to set the VM description/notes and get the name of the datastore where the virtual machine resides

    Hello guys,
    I have vCenter Orchestrator 4.1.1 build 733 installed and it works fine, but I need your help with the following two issues:
    (1) I want to set the description/notes of a virtual machine using a workflow. But I have not found any useful API to create this workflow (I don't want custom attributes, see attachment for details).

    (2) How can I get the name of the datastore where the virtual machine resides? I need this name for a workflow.
    I need your help.
    Thanks in advance!

    With regard to the notes of the VM, the following code (see enclosed package) can do this:

    var oldNotes = vm.summary.config.annotation;
    if (oldNotes == null) { oldNotes = ""; }
    System.log("Current VM notes: " + oldNotes);
    // Now set the new notes:
    // Start by creating a config spec
    var configSpec = new VcVirtualMachineConfigSpec();
    // Update the annotation property with the new value
    configSpec.annotation = notes;
    // Launch the task to reconfigure the virtual machine with the new config spec
    // NOTE: this is safe to apply to a powered-on virtual machine
    var task = vm.reconfigVM_Task(configSpec);

    And as far as the VM information is concerned, take a look at this library workflow: \Library\vCenter\Virtual Machine management\Others\Extract virtual machine information

  • Virtual machine IOPS report - how to display the name of the datastore?

    Hi guys

    I am new to the Foglight community, this is a great tool, and I learn a lot.

    Currently I am trying to create a simple table that will show me metrics from my VMware environment: virtual machine name, IOPS and datastore name.

    However I can't figure out how to include the datastore name in the table, because it is not a metric of the virtual machine. I think I need to expand the scope of my table to include the VMware datastore, but I don't know how to do this.

    -Mark

    Checking the available options, it seems this can be done with WCF (the framework behind the Foglight dashboards).  We generally recommend that customers who plan to build WCF views take the appropriate training or engage our PSO people for it.

    In any case, I can help by showing a quick example of how it's done.

    Please try this on a local/test server.

    Go to Configuration > Definitions.

    Make sure that you are in My Definitions, and click the icon to add a view, then choose, under Tables and Trees, a row-oriented table.

    Give the view a name, make it public and a portlet/reportlet, and apply.

    Switch to the Configuration tab, click the edit button for the rows and choose a query.

    Under the query, expand the VMware node and scroll down

    until you can select the query for virtual machines,

    and press the Set button.

    Your view should look like this

    Now you must select the columns.

    Each column has a value you can edit and there is a button + to add additional columns.

    Let's start with the name of the virtual machine - click the edit button on your default column and choose Context.

    Click the drop-down menu for the input key and choose the current row (VMware virtual machine).

    Click the drop-down menu for the path and scroll down until you can select the name, then click Set.

    You have created a table that lists the names of all virtual machines.

    You can click Save, then click Test, choose a time range and click the result. A new window will open showing you the list of virtual machines.
    From here you can continue to add additional columns, each time choosing the current row as the input key and the path to the metric/string you want to display.

    For example, the name of the datastore:
    I edit the view again.

    Click the configuration tab and click the icon to add a column

    For the column value, I again chose Context; the input key is the current row, and for the path I expand the datastore node

    and scroll until I see the name property.

    If you save and test you will see the result

    Keep adding columns for the data you want; notice that you have arrows that allow you to control the order of the columns.

    Note that you can click Show advanced configuration properties

    This will let you see the extra table properties, such as the header - giving you the opportunity to give the column header a more meaningful name (datastore name, virtual machine name, etc.).

    You can now go to the dashboard/report editor and, under My Views, you will see your new view.

    Drop it in the main view

    I hope this has given you the starting point to build this table.

    As I said, I strongly recommend going through our WCF training if you plan to build more custom views, or engaging the Dell Software PSO organisation to help build views that match your needs.

    Best regards

    Golan

  • ESXi 5.5 only 4 TB usable for the data store

    Hi all

    Although 5.46 TB of capacity is visible in the vSphere Client, I can only use 3.64 TB for a datastore.

    Configuration:

    -HP Proliant Microserver N40L

    -HP Smart Array P410/512 MB BBWC, FW 3.66 Controller

    -4 x 2 TB as a Raid5

    HP-ESXi - 5.5.0 - iso - 5.72 A

    I have read a lot of articles about the >2 TB limits of older ESXi versions or controllers, the 4 TB limitation of the C# vSphere Client, old firmware and ESXi driver problems, etc.

    But I could not understand the problem.

    The P410 controller shows 5.5 TB:

    ~ # esxcli hpssacli cmd -q "controller slot=1 show config"

    Smart Array P410 in Slot 1 (sn: PACCRID110510NR)

       Array A (SATA, Unused Space: 0 MB)

          logicaldrive 1 (5.5 TB, RAID 5, OK)

          physicaldrive 2I:0:5 (port 2I:box 0:bay 5, SATA, 2 TB, OK)
          physicaldrive 2I:0:6 (port 2I:box 0:bay 6, SATA, 2 TB, OK)
          physicaldrive 2I:0:7 (port 2I:box 0:bay 7, SATA, 2 TB, OK)
          physicaldrive 2I:0:8 (port 2I:box 0:bay 8, SATA, 2 TB, OK)

       SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 (WWID: 5001438013D9F37F)

    ESXi obviously knows the same size:

    ~ # esxcli storage core device list

    naa.600508b1001ca5e68adc0bc4bc508d6e
       Display Name: HP Serial Attached SCSI Disk (naa.600508b1001ca5e68adc0bc4bc508d6e)
       Has Settable Display Name: true
       Size: 5723091
       Device Type: Direct-Access
       Multipath Plugin: NMP
       Devfs Path: /vmfs/devices/disks/naa.600508b1001ca5e68adc0bc4bc508d6e
       Vendor: HP
       Model: LOGICAL VOLUME
       Revision: 3.66
       SCSI Level: 5

    (etc.)

    But only about 4 TB of that is usable:

    ~ # df -h
    Filesystem   Size   Used Available Use% Mounted on
    VMFS-5       3.6T   2.1T      1.5T  58% /vmfs/volumes/mainstorage
    vfat       249.7M 177.2M     72.6M  71% /vmfs/volumes/aeda2aeb-c934ad60-46ce-cb77abdfabaa
    vfat       249.7M 175.9M     73.9M  70% /vmfs/volumes/c8ace7e9-a96a5914-16d4-a697ad4634d9
    vfat       285.8M 191.4M     94.4M  67% /vmfs/volumes/55d7af6e-a25f7c97-c3ec-6805ca0de93c

    I tried to increase the size of the datastore; the last used sector was 11720890367 and the last sector was 11720890510, which doesn't really seem to translate into an increase of 1.8 TB.

    Thanks for your help.

    DC

    Your output shows that there are two VMFS partitions:

    3 10229760 3907029134 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

    4 3907031040 11720890367 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

    One partition of about 1.8 TB and another of about 3.6 TB.   That's the reason why you only see about 3.6 TB.

    I don't know why the 1.8 TB VMFS volume is not listed anywhere (it does not show up in the esxcli storage filesystem list either). Did you create any partitions manually beforehand? If you are sure there is no data on this 1.8 TB VMFS partition, you can remove it using the partedUtil command and then use the freed space to extend the existing 3.6 TB datastore. But do not forget to double-check before you delete the partition.
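
    For reference, the sizes of those two partitions can be derived from the sector numbers in the partedUtil output above; a small sketch, assuming the usual 512-byte sectors:

    SECTOR_BYTES = 512  # assuming the usual 512-byte sectors

    # (partition number, start sector, end sector), taken from the partedUtil output above.
    partitions = [(3, 10229760, 3907029134),
                  (4, 3907031040, 11720890367)]

    for num, start, end in partitions:
        size_tib = (end - start + 1) * SECTOR_BYTES / 2**40
        print("partition %d: %.2f TiB" % (num, size_tib))

    # Prints roughly 1.81 TiB for partition 3 and 3.64 TiB for partition 4, which matches
    # the 3.6T datastore that df reports and accounts for the missing ~1.8T.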

  • Dell virtual disk is now larger - want to increase the size of the datastore

    Hello

    I started setting up an ESXi 5.5 Update 1 server this week. I didn't know Dell had shipped the server with two virtual disks instead of one. I realized this after I had already created the datastore and set up a few virtual machines inside it. I called Dell, who sent specific instructions for removing the second (empty) virtual disk and adding its capacity to the main one. In the end, I grew the single VD from 2 TB to 3 TB, and I want to give the remaining space to my datastore.

    I tried to follow the article here that explains how to do this via the CLI.

    Well, it did not work at all. Fortunately, I was able to recover my datastore by setting the start and end sectors back to their original numbers. But I'm still left with almost 1 TB of space that I cannot assign to the datastore. After I rescanned the storage adapters in the client, the new Dell disk size showed up under the devices. Clicking "Increase..." generates the following error, which is what sent me down the CLI path:

    Call "HostDatastoreSystem.QueryAvailableDisksForVmfs" to object "ha-datastoresystem" on ESXi '[myservername]' failed.

    I will paste the notes I took as I worked through it. Things went off the rails when I resized partition 4 to the larger size. Any help, please?

    ---

    I use that as a guide:

    http://KB.VMware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2002461


    1. Use the hardware/device management tools to add the additional disk capacity to the device. For more information, contact your hardware vendor.

    This has been done. The new size of the virtual disk is 2791.88 GB (2.79188 TB).


    2. Open a console to the ESXi host.

    Pretty simple.


    3. Get the device ID for the datastore to be changed.

    ~ # vmkfstools -P "/vmfs/volumes/datastore1/"
    VMFS-5.60 file system spanning 1 partitions.
    File system label (if any): datastore1
    Mode: public
    Capacity 1971926859776 (1880576 file blocks * 1048576), 1042688245760 (994385 blocks) avail, max file size 69201586814976
    UUID: 534e5121-4450260d-19dc-f8bc1238e18a
    Partitions spanned (on "lvm"):
        naa.6c81f660ef0d23001ad809071096d28a:4


    A couple of things to note:

    a. The device ID for datastore1 is: naa.6c81f660ef0d23001ad809071096d28a

    b. The partition number on the disk is 4 (the ":4" at the end).

    c. The prefix "naa." stands for "Network Address Authority"; the number immediately after it uniquely identifies the logical unit.

    4. Check the amount of disk space available on the datastore.

    ~ # df -h
    Filesystem   Size   Used Available Use% Mounted on
    VMFS-5       1.8T 865.4G    971.1G  47% /vmfs/volumes/datastore1


    5. Armed with the device ID, identify the existing partitions on the device by using the partedUtil command.

    ~ # partedUtil get "/vmfs/devices/disks/naa.6c81f660ef0d23001ad809071096d28a"

    364456 255 63 5854986240

    1 63 80324 222 0

    2 80325 8466884 6 0

    3 8466885 13709764 252 0

    4 13711360 3865468766 251 0

    ~ #


    Per the table in the KB article:

    4 13711360 3865468766 251 0   - primary #4, type 251 = 0xFB = VMFS, sectors 13711360-3865468766
    |     |          |      |  |
    |     |          |      |  \--- attribute
    |     |          |      \--- type
    |     |          \--- ending sector
    |     \--- starting sector
    \--- partition number


    Also note how the starting sector number is the old ending sector number + 1.


    6. Identify the partitions that need to be resized and the amount of space to use.

    We want to resize partition 4. I don't really understand the last part of that sentence, though. Moving on.


    7. Determine the end sector you want for the target VMFS datastore's partition. To use all of the free space up to the end of the disk, subtract 1 from the disk size in sectors shown in step 5 in order to get the last usable sector.

    ESXi 5.x has a command to do this:

    ~ # partedUtil getUsableSectors "/vmfs/devices/disks/naa.6c81f660ef0d23001ad809071096d28a"

    1 5854986239

    This means we want partition 4 of "naa.6c81f660ef0d23001ad809071096d28a" to be:

    13711360 - 5854986239 (i.e. to the end of the disk)
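
    The end-sector arithmetic can also be scripted so you don't fat-finger it; a small sketch, where the device path and sector counts are the ones from this walkthrough and everything else is computed:

    # Values taken from the partedUtil output earlier in these notes.
    device = "/vmfs/devices/disks/naa.6c81f660ef0d23001ad809071096d28a"
    total_sectors = 5854986240  # 4th field of the geometry line "364456 255 63 5854986240"
    partition = 4
    start_sector = 13711360     # existing start sector of partition 4 (must stay the same)

    # Last usable sector = disk size in sectors - 1 (what getUsableSectors reports).
    end_sector = total_sectors - 1

    print('partedUtil resize "%s" %d %d %d' % (device, partition, start_sector, end_sector))
    # -> partedUtil resize "/vmfs/devices/disks/naa.6c81f660ef0d23001ad809071096d28a" 4 13711360 5854986239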


    8. Resize the partition containing the target VMFS datastore using the partedUtil command, specifying the original start sector and the desired end sector:

    Using the above information, our command is:

    # partedUtil resize "/vmfs/devices/disks/naa.6c81f660ef0d23001ad809071096d28a" 4 13711360 5854986239


    9. In step 8, the partedUtil command may report a warning:

    It did not. Moving on.


    10. The partition table has now been adjusted, but the VMFS datastore within the partition is still the same size. There is now empty space in the partition into which the VMFS datastore can be grown.


    11. Run the vmkfstools -V command to refresh/rescan the VMFS volumes.

    Done.


    12. Grow the VMFS datastore into the new space using the vmkfstools --growfs command, specifying the partition containing the target VMFS datastore twice:

    vmkfstools --growfs "/vmfs/devices/disks/naa.6c81f660ef0d23001ad809071096d28a:4" "/vmfs/devices/disks/naa.6c81f660ef0d23001ad809071096d28a:4"


    It did not work. I got an error:

    /vmfs/volumes # vmkfstools --growfs "/vmfs/devices/disks/naa.6c81f660ef0d23001ad809071096d28a:4" "/vmfs/devices/disks/naa.6c81f660ef0d23001ad809071096d28a:4"

    Could not get the device geometry information for /dev/disks/naa.6c81f660ef0d23001ad809071096d28a:4


    Also, the partition ended up very different from what I asked for:

    ~ # partedUtil get "/vmfs/devices/disks/naa.6c81f660ef0d23001ad809071096d28a"

    364456 255 63 5854986240

    1 63 80324 222 0

    2 80325 8466884 6 0

    3 8466885 13709764 252 0

    4 13711360 1560018942 251 0


    I fixed it by running these commands:

    ~ # partedUtil resize "/vmfs/devices/disks/naa.6c81f660ef0d23001ad809071096d28a" 4 13711360 3865468766

    ~ # vmkfstools -V

    ~ # partedUtil get "/vmfs/devices/disks/naa.6c81f660ef0d23001ad809071096d28a"

    364456 255 63 5854986240

    1 63 80324 222 0

    2 80325 8466884 6 0

    3 8466885 13709764 252 0

    4 13711360 3865468766 251 0

    Update:

    Since it was such a new machine, not yet in active production, we backed up the management VMs off the ESXi host, then deleted the virtual disk, recreated it, and created a datastore with the right size (GPT this time, naturally). We put the management virtual machines back on the datastore. For the Windows virtual machines, we restored them using AppAssure. Everything is OK now.

    Need to add a new item to the punch list: check how Dell has configured the virtual disks. :-)
