Datastore usage on disk: what is the relationship?

On some of my servers, I have a warning flag for datastore usage on disk. I have a datastore that is 99.50 GB capacity, 80.43 GB provisioned and 19.07 GB free. My question is: what is the capacity-to-free-space ratio?

From a performance point of view, I would say it depends more on the I/Os per second hitting your storage than on the actual free space.

From a thin/thick provisioning point of view: if you thin provision beyond the total amount of space you have available, you will need to make sure you don't get close to exceeding the space actually available.

For a normal datastore, 20-25% free is usually considered healthy, to account for snapshots being opened during backups/replications (or being left behind if someone has forgotten about them!).
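As a quick back-of-the-envelope check (a sketch, not from the original thread; the numbers are taken from the question above):

```python
# Quick sketch of the ratios from the question's figures (all in GB).
capacity = 99.50
provisioned = 80.43
free = 19.07

free_ratio = free / capacity          # fraction of the datastore still free
provisioned_ratio = provisioned / capacity  # >1.0 would mean thin over-provisioning

print(f"free: {free_ratio:.1%}, provisioned: {provisioned_ratio:.1%}")
```

At roughly 19.2% free, this datastore sits just under the 20-25% guideline mentioned above, which would explain the warning.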

Hope this helps,

Dan

Tags: VMware

Similar Questions

  • Q re a warning on a datastore: datastore usage on disk

    We noticed a warning on a datastore:

    Warning
    Datastore usage on disk

    The capacity of the datastore is shown in the screenshot below.

    We do not know how to resolve this alarm.

    1-11-2011 1-50-19 PM.gif

    Thank you.

    ... 'Provisioned space' greater than 'Capacity'.

    This is probably due to thin provisioning or snapshots on the disks.

    André

  • Default value when using relationships between properties?

    I'm trying to figure out whether, when using custom property relationships in vCAC, there is a way to select a default value in the second property based on the selection in the first. For example, say I have a custom property called Environment with selectable values such as Dev, Test and Stage, and I then create a relationship to another property such as CustSpec (to pick a specific customization specification for that environment). Can I set a default value for the CustSpec property without the user selecting that option? I created an XML file as described in the documentation and applied a ValueExpression to the CustSpec property. It works fine if I manually select the available CustSpec value based on the selection of my Environment property, but it is not applied by default, and for a dropdown with a single option that the user doesn't care about, I would like it to simply apply that value to the property.

    Here is a short XML example:

    <?xml version="1.0" encoding="utf-8" standalone="yes"?>
    <ArrayOfPropertyValue xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <PropertyValue>
        <FilterName>Environment</FilterName>
        <FilterValue>Dev</FilterValue>
        <Value>CustSpecName</Value>
      </PropertyValue>
    </ArrayOfPropertyValue>

    The relationship requires that you select the value of the child, even if there is only one option. At one time in 5.2, with the optional web portal you could install, the child was chosen automatically if only one was present, but that has disappeared. What I would do is use a vRO workflow in the BuildingMachine state to set the value of CustSpec based on the property that drives the other's value, and pass it back to vRA. It's a fairly easy operation and a good opportunity to play with vRA -> vRO if you haven't done anything with it before.

    I wrote a blog post on how to update a property in vRO and pass it back to vRA; there's a follow-up post with a more complex update that lets you have a generic template that updates the cloneFrom and cloneSpec properties based on which OS is selected.

    http://www.realize.NET/2014/10/Vcac-6-1-update-custom-properties-VCO/

  • Expanding a RAID 5 datastore

    Hi, I have a datastore on a 4-disk RAID 5 array and I want to add 2 more drives to the RAID 5. Is it possible to expand the datastore / RAID 5 array while keeping the existing datastore / data (VMs) intact?

    Thanks in advance!

    You must enter your PERC BIOS.

    Follow Dell's manual.

    And (as noted above), perform a full backup before you start.

    André

  • Is the "engineered to" relationship visible in the Data Modeler?

    Hello

    I was wondering about the following topic:

    When I create (engineer) a table based on its entity, there is a relationship between the entity and the table.

    So to speak: an "engineered to" relationship.

    Can this relationship be shown in the Data Modeler?

    If so:

    - can we use it to navigate from a table to its entity and vice versa?

    - can we maintain it through design changes and changes to the implemented table?

    I know that the relationship is available in the XML file that stores the table (Table > generatorID), but I am unable to see or use the relationship in the Data Modeler itself.

    Kind regards

    Art

    Hi Art,

    There is an "Impact analysis" section in the entity and table dialog boxes. Under the 'Mapping' node you will see the mapped objects and can open them from there.

    You cannot create or modify the mappings manually for now.

    Philippe

  • Identifying chains of records using CONNECT BY syntax

    Hello

    Can someone help with the following problem please?

    Our database records assessments of children from families in difficulty. When we get in touch with them, ideally:
    A child receives a preliminary assessment (assessment A).
    If they are deemed to need additional support, they are given a second assessment (B) that is triggered by assessment A, with a trigger ID to identify which assessment it comes from.
    If they are deemed to need further support, they are given a third assessment (C) that is triggered by assessment B, with a trigger ID to show that it comes from assessment B.
    The same is true for a fourth assessment (assessment D), which is triggered by assessment C.

    However, due to the poor implementation of this concept by our database provider and a lack of knowledge among the workers, we have 2 problems:

    (1) assessment A isn't always the starting point, as a worker can start any assessment at any time, e.g. with assessment C.

    (2) given this, a child can have several assessments of the same type, e.g. 3 x assessment C, 2 x assessment B, in no particular order.

    The problem:

    I need to identify the separate chains (desired_output) of intervention using the relationship between the record ID and the trigger ID, as shown in the table below:
    CHILD_ID RECORD_ID TRIGGER_ID ASM_NAME REC_START_DATE            REC_END_DATE              DESIRED_OUTPUT         
    -------- --------- ---------- -------- ------------------------- ------------------------- ---------------------- 
    A00001   R297931              B        18-JUN-10                 18-JUN-10                 1                      
    A00001   R299381   R297931    C        23-JUN-08                 23-JUN-08                 1                      
    A00001   R133219              A        12-AUG-08                 12-AUG-08                 2                      
    A00001   R240118              A        30-OCT-09                 30-OCT-09                 3                      
    A00001   R604913              A        17-AUG-12                 17-AUG-12                 4                      
    A00001   R604943   R604913    B        17-AUG-12                 17-AUG-12                 4                      
    A00001   R604961   R604943    C        17-AUG-12                 03-SEP-12                 4                      
    A00001   R605195              B        25-AUG-12                 25-AUG-12                 5                      
    A00001   R605214              A        28-AUG-12                 28-AUG-12                 6                      
    A00001   R609999   R604961    D        03-SEP-12                 05-SEP-12                 4                     
     
    Data:
    select * from
    (select * from
     
    (select 'A00001' as child_id, 'R297931' as record_id, null  as trigger_id, 'B' as asm_name, to_date('18-06-2010','dd/mm/yyyy') as rec_start_date, to_date('18-06-2010','dd/mm/yyyy') as rec_end_date, 1 as desired_output from dual) union all
    (select 'A00001' as child_id, 'R299381' as record_id, 'R297931' as trigger_id, 'C' as asm_name, to_date('23-06-2008','dd/mm/yyyy') as rec_start_date, to_date('23-06-2008','dd/mm/yyyy') as rec_end_date, 1 as desired_output from dual) union all
    (select 'A00001' as child_id, 'R133219' as record_id, null as trigger_id, 'A' as asm_name, to_date('12-08-2008','dd/mm/yyyy') as rec_start_date, to_date('12-08-2008','dd/mm/yyyy') as rec_end_date, 2 as desired_output from dual) union all
    (select 'A00001' as child_id, 'R240118' as record_id, null as trigger_id, 'A' as asm_name, to_date('30-10-2009','dd/mm/yyyy') as rec_start_date, to_date('30-10-2009','dd/mm/yyyy') as rec_end_date, 3 as desired_output from dual) union all
    (select 'A00001' as child_id, 'R604913' as record_id, null as trigger_id, 'A' as asm_name, to_date('17-08-2012','dd/mm/yyyy') as rec_start_date, to_date('17-08-2012','dd/mm/yyyy') as rec_end_date, 4 as desired_output from dual) union all
    (select 'A00001' as child_id, 'R604943' as record_id, 'R604913' as trigger_id, 'B' as asm_name, to_date('17-08-2012','dd/mm/yyyy') as rec_start_date, to_date('17-08-2012','dd/mm/yyyy') as rec_end_date, 4 as desired_output from dual) union all
    (select 'A00001' as child_id, 'R604961' as record_id, 'R604943' as trigger_id, 'C' as asm_name, to_date('17-08-2012','dd/mm/yyyy') as rec_start_date, to_date('03-09-2012','dd/mm/yyyy') as rec_end_date, 4 as desired_output from dual) union all
    (select 'A00001' as child_id, 'R605195' as record_id, null as trigger_id, 'B' as asm_name, to_date('25-08-2012','dd/mm/yyyy') as rec_start_date, to_date('25-08-2012','dd/mm/yyyy') as rec_end_date, 5 as desired_output from dual) union all
    (select 'A00001' as child_id, 'R605214' as record_id, null as trigger_id, 'A' as asm_name, to_date('28-08-2012','dd/mm/yyyy') as rec_start_date, to_date('28-08-2012','dd/mm/yyyy') as rec_end_date, 6 as desired_output from dual) union all
    (select 'A00001' as child_id, 'R609999' as record_id, 'R604961' as trigger_id, 'D' as asm_name, to_date('03-09-2012','dd/mm/yyyy') as rec_start_date, to_date('05-09-2012','dd/mm/yyyy') as rec_end_date, 4 as desired_output from dual)) child_records
    Originally, I thought of using Oracle's CONNECT BY syntax, but it does not work (as far as I can tell!) because I have no start condition (a chain of assessments can start at A or B or C or D), which leads to duplicated rows.

    I thought I could use CONNECT_BY_ROOT to group common assessments, but I am not convinced that this will give consistent results.

    -------------------------
    select
    child_records.*, 
    connect_by_root(nvl(trigger_id,record_id)) chain_id
    from child_records
    connect by trigger_id = prior record_id
    --------------------

    An alternative is possibly using trigger_id = lag(record_id, 1, null) over (partition by child_id order by ...), but since the assessments are in no particular order, I don't think I can specify an order clause...?

    Can anyone help generate the desired output, please?

    Thank you

    TP

    Hello

    Little Penguin says:
    ... However, due to the poor implementation of this concept by our database provider and a lack of knowledge among the workers, we have 2 problems:

    (1) assessment A isn't always the starting point, as a worker can start any assessment at any time, e.g. with assessment C.

    (2) given this, a child can have several assessments of the same type, e.g. 3 x assessment C, 2 x assessment B, in no particular order.

    This isn't necessarily a bad design. Whether it really fits your business rules is another matter. But as a means of representing cause-and-effect events for use in CONNECT BY queries, it makes sense.

    The problem:

    I need to identify the separate chains (desired_output) of intervention using the relationship between the record ID and the trigger ID, as shown in the table below:

    Let me make sure I understand. You don't really have a desired_output column; you need to derive it from the other columns. Right?

    CHILD_ID RECORD_ID TRIGGER_ID ASM_NAME REC_START_DATE            REC_END_DATE              DESIRED_OUTPUT
    -------- --------- ---------- -------- ------------------------- ------------------------- ----------------------
    A00001   R297931              B        18-JUN-10                 18-JUN-10                 1
    A00001   R299381   R297931    C        23-JUN-08                 23-JUN-08                 1
    A00001   R133219              A        12-AUG-08                 12-AUG-08                 2
    A00001   R240118              A        30-OCT-09                 30-OCT-09                 3
    A00001   R604913              A        17-AUG-12                 17-AUG-12                 4
    A00001   R604943   R604913    B        17-AUG-12                 17-AUG-12                 4
    A00001   R604961   R604943    C        17-AUG-12                 03-SEP-12                 4
    A00001   R605195              B        25-AUG-12                 25-AUG-12                 5
    A00001   R605214              A        28-AUG-12                 28-AUG-12                 6
    A00001   R609999   R604961    D        03-SEP-12                 05-SEP-12                 4                     
    

    Data:...

    Thanks for posting the sample data; that really helps.

    Originally, I thought of using Oracle's CONNECT BY syntax, but it does not work (as far as I can tell!) because I have no start condition (a chain of assessments can start at A or B or C or D), which leads to duplicated rows.

    Doesn't

    START WITH  trigger_id  IS NULL
    

    identify a starting point? If something was not triggered by something else, isn't it a starting point? That is actually quite common in hierarchical tables.

    I thought I could use CONNECT_BY_ROOT to group common assessments, but I am not convinced that this will give consistent results.

    I'm not sure I understand the problem. What do you mean by "consistent results"? What doubt are you worried about, exactly?

    -------------------------

    select
    child_records.*,
    connect_by_root(nvl(trigger_id,record_id)) chain_id
    from child_records
    connect by trigger_id = prior record_id
    

    You got that right. If I understand what you mean by consistent results, it will give them. You want the START WITH condition, of course, and since the starting rows will never have a trigger_id, there is no need to say

    CONNECT_BY_ROOT  NVL (trigger_id, record_id)   AS chain_id
    

    You can simply say

    CONNECT_BY_ROOT  record_id   AS chain_id
    

    This will uniquely identify the chains by their record_ids. It looks like you want to assign new sequence numbers (1, 2, 3, ...) to identify the chains. That takes an extra step.

    --------------------

    An alternative is possibly using trigger_id = lag(record_id, 1, null) over (partition by child_id order by ...), but since the assessments are in no particular order, I don't think I can specify an order clause...?

    Right; LAG depends on order, and order tells us nothing in this problem.
    In fact, order means so little to this problem that an event can come before the event that triggered it.
    For example, if I understand the first two rows of your output

    CHILD_ID RECORD_ID TRIGGER_ID ASM_NAME REC_START_DATE            REC_END_DATE              DESIRED_OUTPUT
    -------- --------- ---------- -------- ------------------------- ------------------------- ----------------------
    A00001   R297931              B        18-JUN-10                 18-JUN-10                 1
    A00001   R299381   R297931    C        23-JUN-08                 23-JUN-08                 1                      
    

    event C was triggered by event B, even though C took place two years before B.
    (Not that it matters for the SQL problem, but can you explain the logic of how events can come before the events that triggered them? I'm just curious.)

    Here's a way you can assign sequential numbers to identify the chains:

    WITH  got_d_num  AS
    (
        SELECT  c.*
        ,       ROW_NUMBER () OVER ( PARTITION BY  child_id   -- Just guessing
                                     ORDER BY      NVL2 ( trigger_id
                                                        , 2   -- rows with trigger_ids come 2nd
                                                        , 1   -- rows without come 1st
                                                        )
                                   ,               rec_start_date
                                   ,               asm_name
                                   )  AS d_num
        FROM    child_records  c
    )
    SELECT  child_id, record_id, trigger_id, asm_name, rec_start_date, rec_end_date
    ,       desired_output            -- if needed
    ,       CONNECT_BY_ROOT d_num  AS chain_num
    FROM    got_d_num
    START WITH  trigger_id  IS NULL
    CONNECT BY  trigger_id  = PRIOR record_id
    ORDER BY  child_id
    ,         rec_start_date
    ,         asm_name
    ;
    

    Output:

                                                   DESIRED
    CHILD_ RECORD_ TRIGGER A REC_START REC_END_D   _OUTPUT  CHAIN_NUM
    ------ ------- ------- - --------- --------- --------- ----------
    A00001 R299381 R297931 C 23-JUN-08 23-JUN-08         1          3
    A00001 R133219         A 12-AUG-08 12-AUG-08         2          1
    A00001 R240118         A 30-OCT-09 30-OCT-09         3          2
    A00001 R297931         B 18-JUN-10 18-JUN-10         1          3
    A00001 R604913         A 17-AUG-12 17-AUG-12         4          4
    A00001 R604943 R604913 B 17-AUG-12 17-AUG-12         4          4
    A00001 R604961 R604943 C 17-AUG-12 03-SEP-12         4          4
    A00001 R605195         B 25-AUG-12 25-AUG-12         5          5
    A00001 R605214         A 28-AUG-12 28-AUG-12         6          6
    A00001 R609999 R604961 D 03-SEP-12 05-SEP-12         4          4
    

    This example uses rec_start_date to affect chain_num and also to sort the output, but not to determine what is in a chain. The first 3 untriggered events (in rec_start order) were in August 2008, October 2009 and June 2010, so they were assigned chain_nums 1, 2 and 3, in that order. Anything that was triggered by them, directly or indirectly, gets the same chain_num, whether it happened before or after the starting point of the chain. Thus, the first row of the output, in June 2008, gets chain_num = 3. You assigned desired_output = 1 to that row. If you can explain how you got the number 1, we can probably find a way to code it so that the computed chain_num is identical to desired_output. In the query above they are not the same, but they are related: everywhere you specified desired_output = 1, the query produces chain_num = 3. Where the numbers are the same (for example desired_output = 4 = chain_num) it's just a coincidence.

    Note that when I used ROW_NUMBER, I did it in a subquery, not in the main query where the CONNECT BY is done. Never use analytic functions (for example, ROW_NUMBER) in the same query as CONNECT BY. Analytic functions often cause CONNECT BY conditions to be evaluated incorrectly. I have never seen any literature on this subject and it doesn't always happen, but I suggest you avoid mixing the two.

    Edited by: Frank Kulash, Sep 15, 2012 10:00

    Edited by: Frank Kulash, Sep 15, 2012 10:44

    I just read John's answer, which is a nice illustration of my last point: use separate queries for the analytics and the CONNECT BY. You can use the analytic function first and then CONNECT BY, as I did, or you can do the CONNECT BY first and use the analytic function (in John's solution, DENSE_RANK) afterwards. Either way, you must separate them.
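    For readers outside Oracle, the same chain-grouping idea can be sketched in plain Python (a sketch, not from the thread; the record IDs and dates come from the sample data posted above):

```python
# Each record points to the record that triggered it via trigger_id;
# roots are records with no trigger_id, like START WITH trigger_id IS NULL.
from datetime import date

# (record_id, trigger_id, asm_name, rec_start_date) from the sample data
records = [
    ("R297931", None,      "B", date(2010, 6, 18)),
    ("R299381", "R297931", "C", date(2008, 6, 23)),
    ("R133219", None,      "A", date(2008, 8, 12)),
    ("R240118", None,      "A", date(2009, 10, 30)),
    ("R604913", None,      "A", date(2012, 8, 17)),
    ("R604943", "R604913", "B", date(2012, 8, 17)),
    ("R604961", "R604943", "C", date(2012, 8, 17)),
    ("R605195", None,      "B", date(2012, 8, 25)),
    ("R605214", None,      "A", date(2012, 8, 28)),
    ("R609999", "R604961", "D", date(2012, 9, 3)),
]

parent = {rec_id: trig for rec_id, trig, _, _ in records}

def root_of(rec_id):
    """Follow trigger_id links up to the untriggered starting record."""
    while parent.get(rec_id) is not None:
        rec_id = parent[rec_id]
    return rec_id

# Number the roots in rec_start_date order, like ROW_NUMBER() in the subquery.
roots = sorted((r for r in records if r[1] is None), key=lambda r: (r[3], r[2]))
chain_num = {r[0]: i + 1 for i, r in enumerate(roots)}

# Every record inherits the chain number of its root, like CONNECT_BY_ROOT.
chains = {rec_id: chain_num[root_of(rec_id)] for rec_id, _, _, _ in records}
```

    Running this reproduces the chain_num column of the output above, e.g. R299381 lands in chain 3 even though it predates its root R297931.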

  • If I use an SSD to boot the operating system and a hard disk for data, what exactly will the benefit be?

    Hello

    I would like to know: if I use an SSD for booting the operating system and a hard drive for data, what will the benefit of this be?

    I know that SSDs boot much more quickly and read and write data faster.

    If I use the SSD for the OS, then my only advantage will be faster booting of the OS, nothing more,

    but reading and writing data will still be done by the hard drive, so I will not get the full benefit of the SSD; data will be read and written by the hard drive!

    Am I right or wrong?

    Johan

    An SSD benefits load times most of all, and that goes for all software. If you install Windows on an SSD and go back to a hard drive after a few weeks, you will certainly appreciate the value of an SSD for the loading alone.

    While startup is where Windows benefits most obviously, Windows is usually a bit snappier overall on an SSD.

    An SSD should also help in many programs such as Photoshop, video editing or CAD, due to the scratch data these programs write.

    I think the reads and writes are most visible with large files.

    All my programs are on the SSD, but my games are on the hard disk.

    Games load a large amount of data and will load faster from an SSD, but it's still only a few seconds, and as games use a lot of GB, it is not really cost-effective to keep 150+ GB of them on an SSD.

    - I keep most of my data, except music (to save space), in my user folder, but haven't really noticed a difference in speed when opening a data file, such as a Word document, PDF, image, or similar "small" files, from the hard drive.

    - I wouldn't take an SSD under 120 GB. Trying to run Windows on a 60 GB one means too much fiddling around with managing data between the SSD and HDD. In particular, you should keep 20% of an SSD free to allow both garbage collection and a supply of spare memory cells to extend the life of the SSD (data gets moved to empty cells when cells die).

    There is also a noticeable difference between SATA2 and SATA3 controller speeds, so if your motherboard uses SATA2, the advantage of an SSD is not as great as with SATA3.

    In addition, the newer SATA3 SSD controllers are faster than those of two years ago.

    SSDs are a big topic. There are many articles out there that may be useful.


  • Proper way to use the Concurrent Data Store

    Hello

    I'm developing a multithreaded C++ application that uses the Berkeley DB C++ library.

    In my case, I have several databases that I have grouped in an environment. It is important for me to use an environment because I need control over the cachesize parameter.

    I don't need transaction guarantees and have mostly reads, so I decided to use the Concurrent Data Store.

    I first pre-fill all the databases with a number of entries (single-threaded setup phase) and then work on them concurrently (mostly reads, but also insertions and deletions).

    I tried all kinds of different configurations, but I can't get it to work without specifying DB_THREAD as an environment flag.

    I don't want to do that, because then all access through a handle is serialized, according to the documentation:

    "... Note that the activation of this indicator will serialize calls to DB using the handle between the threads. If

    simultaneous scaling is important for your application, we recommend handles separate for each thread opening

    (and do not specify this indicator), rather than share handles between threads. "

    (Berkeley DB QAnywhere C++)

    So I tried to open the environment with the following flags:

    DB_CREATE | DB_PRIVATE | DB_INIT_MPOOL | DB_INIT_CDB

    All database handles in this environment are opened only with the DB_CREATE flag.

    Since my understanding is that access through the same database handle needs to be synchronized, I opened separate handles for each database in each thread (opening the handles is still single-threaded).

    In my first approach, I only used the global environment object. That does not work, and gives the following error message during operations:

    DB_LOCK-> lock_put: Lock is no longer valid

    So I thought that, since the same global env handle is passed to all the separate DB handles, there is perhaps a race condition on the env handle.

    So in my next test, I also opened separate env handles in each thread (each owning its own db handles).

    That does not produce a db error, but now it seems that each thread sees its own version of the databases (I call stat early in the life of each thread and it sees all the dbs as empty).

    What is the right way to use the Concurrent Data Store? Should each thread really open its own set of db handles? What about the number of open env handles?

    PS: Not specifying the DB_PRIVATE flag seems to do the job, but for performance reasons I want all operations to happen in the cache; not specifying DB_PRIVATE means several extra writes to disk in my scenario.

    Thanks a lot for your help.

    CDS (Concurrent Data Store) allows a single writer with multiple readers to access the db at any given point in time. The writer's handle does not have to be shared with the readers. If you share a DB handle, then calls through it are serialized, but if each thread has its own DB handle, this is not the case. Since you have an environment, DB_THREAD must be set at the environment level; this allows sharing of the environment handle. As for the "DB_LOCK-> lock_put: Lock is no longer valid" error: can you provide your code so we can take a look? Also, which BDB version are you using?

  • Satellite A210-171: what info do I need before using the recovery disk?

    Hello

    I have 1 recovery disk that came with my Equium and have never used it before.
    It says on the back of the recovery disk envelope: "insert the first disk".

    I guess there should be a second?

    Also, what info do I need to provide during the installation process, such as a serial number?
    All I have is a black and white sticker on the side of the box it came in, but that was so long ago.

    I'm having a lot of problems with Windows not installing things, and many others.
    Windows updates do not work either, and I tried to fix those, but a clean reinstall of the operating system was recommended.
    Now, at this point, nothing will install because Windows setup will not work.
    So I think I will have to try a clean install, but I do know that the CD drive works.

    I've had a great run with this laptop; it's a little slow for my needs today, but a very good product overall, purchased in 2007.
    With a clean OS and 4 GB of RAM it could last for years; I cannot afford a new one right now.
    I've always managed to avoid using a recovery disc in the past, as I never had problems until now.

    Thank you

    Post edited by: sounds

    > I have 1 recovery disk that came with my Equium and have never used it before.
    > On the back of the recovery disk envelope, it says: "insert the first disk".
    > I guess there should be a second?

    I'm wondering why you got this disc with your laptop. I did not get a recovery disk; I had to create one using the Toshiba Recovery Disc Creator software. But my laptop is more recent than your A210, which might be the reason why you have this disc.

    I created a recovery disk and needed two DVDs; I think it depends on the size of the image file.

    > Also, what information do I need to provide during the installation process, such as a serial number?
    This is not necessary because your system has already been activated.

    But I recommend you back up your personal data, because the recovery disk will format the whole hard drive and you will lose your files.

    PS: you should read the other threads in the recovery forum for your product!

  • When I try to use the Windows Update link on my XP computer, I get a message indicating that the location where Windows Update stores data has changed and needs to be repaired. How can I solve this problem?

    When I try to use the Windows Update link on my XP computer, even after using the Microsoft Fix it tool, I get a message indicating that the location where Windows Update stores data has changed and must be repaired. How can I solve this problem?

    I'm not that computer literate and do not understand what needs to be fixed.

    This problem started just a few weeks ago, when I noticed that I had not received any of the recent automatic updates that I regularly get. So I tried to do it manually via my Control Panel.

    I use ESET NOD32 Antivirus software.

    Hello

    1. What is the exact error message or error code?

    2. Did you make any changes to the computer before this problem started?

    3. Have you tried checking for updates?

    I would suggest trying the following methods and checking whether they help.

    Method 1:

    Reset the Windows Update components and then try to download the updates again.

    How to reset the Windows Update components?

    http://support.Microsoft.com/kb/971058

    Warning: Important This section, method, or task contains steps that tell you how to modify the registry. However, serious problems can occur if you modify the registry incorrectly. Therefore, make sure that you proceed with caution. For added protection, back up the registry before you edit it. Then you can restore the registry if a problem occurs. For more information about how to back up and restore the registry, click on the number below to view the article in the Microsoft Knowledge Base: http://support.microsoft.com/kb/322756

     

    Method 2:

    Run the System File Checker scan tool, then restart and check again.

    Description of Windows XP and Windows Server 2003 System File Checker (Sfc.exe):

    http://support.Microsoft.com/kb/310747

    Please reply with more information so that we can help you further.

  • How to do snapshot consolidation if the datastore is out of disk space...

    Hello

    I have a virtual machine that has accumulated many snapshot files and now cannot consolidate them using the tool (it always says there is not enough disk space to do the consolidation).

    The datastore now has 60 GB of free space, while the datastore's maximum disk space is 850 GB and the VM occupies almost 700 GB of disk space...

    Can I attach a 3 TB USB drive to the ESXi server and move all the files across, or is there another way to do this...

    HELP ~ ~

    With the disk space required on drive E, it may be worth considering an alternative:

    • create another (much smaller) virtual disk
    • stop all services that access drive E
    • copy the data from drive E over to the new virtual disk (for example using robocopy)
    • switch the drive letters
    • once everything is working as expected, get rid of the old virtual disk

    This will not only reduce the disk space used on the datastore, but also ensure that the virtual disk cannot grow any further.
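    The copy step above can be sketched generically (robocopy is the usual Windows tool; this hypothetical helper uses Python's shutil instead, and the real paths would be drive E and the new disk):

```python
# Sketch of the "copy everything, then verify" step (paths are hypothetical).
import shutil
from pathlib import Path

def migrate(src: Path, dst: Path) -> int:
    """Copy the whole src tree to dst; return how many files landed in dst."""
    shutil.copytree(src, dst, dirs_exist_ok=True)
    return sum(1 for p in dst.rglob("*") if p.is_file())
```

    Only after verifying the copy (file counts, spot checks) would you swap the drive letters and remove the old virtual disk.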

    André

  • Creating a datastore in ESXi 5 using the CLI

    Hi all

    I have been trying to script datastore creation on a local drive in ESXi 5. I found lots of instructions for 4.1, whereby you use partedUtil to initialize the disk first, then use the vmkfstools command to create the VMFS volume, and then use vim-cmd with vmfs_create to create the datastore. However, for version 5 I could only find one source (virtuallyGhetto) where they use partedUtil followed by vmkfstools and do not use vim-cmd vmfs_create. Can someone tell me whether that is all that is required, or do I still need to use vmfs_create?

    Kind regards

    Vlad

    Vlad,

    You don't need to use the vim-cmd vmfs_create option; the vmkfstools command shown on my blog creates the VMFS volume, which in turn provides you with a VMFS datastore. vim-cmd vmfs_create does essentially the same thing if you don't use the vmkfstools command.

  • Disadvantages of using the default tablespace to store data for a partitioned table?

    Can someone tell me, are there any disadvantages or performance problems in using the default tablespace in Oracle?

    I did not create any tablespaces during database creation, so all the partitioned data went into the default tablespace named 'USERS'. Should I continue using the same tablespace? I am storing data in a table that will grow a lot; it may contain millions of records. Will this cause any performance degradation? Please advise me on this...

    Different tablespaces allow easier administration and maintenance. In some cases, for performance reasons, different disks are presented to the database (fast and not-so-fast, different RAID levels...).
    For example, if you have several database schemas for different applications, you may want to separate the schema objects into different tablespaces.
    Or, for example, you may want to keep read-write and read-only tablespaces separate. Read-write tablespaces and their data files would go on very fast disks, read-only ones on cheaper and perhaps slower disks (not required). Again, separate tablespaces make this very easy to do.
    Or you may want to keep indexes separate from tables, in a different tablespace on a different mount point (disk). In that case it is probably better to use ASM, but it is one more reason to separate them.

    And in your case, it may be easier to manage if you create new tablespaces for these new objects.
    For example:
    1 tablespace for small tables
    1 tablespace for small indexes
    1 tablespace for large tables
    1 tablespace for large indexes
    and so on. It all depends on your particular architecture, your database's data growth, and what you do with the data after a year, two years, three...

  • Why does Firefox execute JavaScript in the context of the referencing window/tab when target="_blank" is used with a data: URI?

    I have a link with target set to _blank and the href attribute set to a data: URI containing a script tag, and the previous window's context executes the JavaScript. Does this also happen with window.open()? This would allow an attacker with XSS to steal localStorage data, and there is apparently no way for a developer to isolate the window's execution context. My question is: why does a data: URI run script in the previous window's context (i.e., the window with the target="_blank" attribute set) even though it opens in a new tab? And how is a developer supposed to isolate the JavaScript window context, since _blank doesn't do it?

    JavaScript allows access between windows (for example, using the window.opener property in the 'child' window).

    What is the attack you want to protect against?

  • My Gmail and iCloud accounts disappeared from the list under "Mailbox store" in Mail. When I try to recreate my Gmail account using "New Mailbox..." I get the message "This account already exists." What can I do to recover my mailboxes?

    I use YOSEMITE 10.10.5.

    My Gmail and iCloud mailboxes (and several smart mailboxes) disappeared from the list under "Mailbox store" in Mail. When I try to recreate my Gmail account using "New Mailbox..." (using the + sign at the lower left of the screen in Mail) I get the message "This mailbox already exists." What can I do to recover my mailboxes?

    Restart the Mac and Mail.
