vFoglight 6.7 database size

We monitor 2 vCenters with around 220 hosts and 2,200 VM guests, and we do all of this with a single FMS.  We have the retention policy set so that historical data should be purged after 4 months, but I do not know what is happening.  How can I make sure that the purge is running as it should?  Our database is 332 GB and keeps climbing...

Hi Chris - there was a bug where vFoglight would not remove old data. For example, if your retention policy originally kept everything forever and a year later you changed it to 4 months, the data between 4 months and a year old would not be deleted. Check this KB article on how to fix it: https://support.quest.com/SolutionDetail.aspx?ID=SOL52431&PR=Foglight&St=published

An easy way to check whether the data has actually been removed is to use the time range selector at the top of the console and set it to a date more than 4 months back; all the VM graphs, dials, etc. should be greyed out if there is no data for that period. The purge itself is handled by the "Daily Database Maintenance" task, which should be listed under Administration > Schedules (Manage). If it is disabled or missing, no purging will take place.

Hope that helps - Danny

Tags: Dell Tech

Similar Questions

  • vCenter Server database size

    Hi all

    My vCenter Server (5.0) database has been growing by approximately 2 GB per day over the last few weeks.  It was initially on a local SQL Express server, but it was moved to full SQL once it reached the 10 GB limit. That was a week ago – it is currently about 23 GB.

    I tried what is suggested in the following KBs, but still no change.

    http://KB.VMware.com/selfservice/microsites/search.do?cmd=displayKC&externalId=1007453

    http://KB.VMware.com/selfservice/microsites/search.do?cmd=displayKC&externalId=1025914

    I have a small setup - 5 hosts, about 150 VMs (servers and desktops).  The database sizing calculator in the vSphere Client estimates the size should be about 1 GB (granted, it is only an indication), so the actual size seems ridiculously large given the infrastructure.

    The statistics level is set to '1' and retention for tasks and events is set to 15 days (it was previously 90).

    It is not the SQL log files that are consuming the space (per my SQL admin, who checked the DB).

    In terms of additional modules, I have SRM, vShield, Update Manager and NetApp VSC - these were all installed before the database size started growing.

    Any idea where I can look to see what is happening and what is causing the database to grow so big?

    Thank you

    Tom

    It might help to understand which table is filling up with so much data. This KB provides instructions on how to do it.

    Display the size of all tables on MS SQL server

    Have you opened an SR to have an engineer take a peek?
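
    In case it is useful, here is a rough query along those lines (not from the KB itself, just my own approximation, assuming SQL Server 2005 SP2 or later; run it in the vCenter database). Pages are 8 KB each.

    -- Hedged sketch: list user tables by reserved space, largest first.
    SELECT  OBJECT_SCHEMA_NAME(ps.object_id) AS schema_name,
            OBJECT_NAME(ps.object_id)        AS table_name,
            SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb,
            SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS approx_rows
    FROM    sys.dm_db_partition_stats AS ps
    JOIN    sys.objects AS o ON o.object_id = ps.object_id
    WHERE   o.type = 'U'          -- user tables only
    GROUP BY OBJECT_SCHEMA_NAME(ps.object_id), OBJECT_NAME(ps.object_id)
    ORDER BY reserved_mb DESC;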

  • Size of database

    Hi all

    We need to know the size of our database; we have the two queries below.

    1. SELECT SUM(bytes/1024/1024/1024) "Total size of database in GB" FROM sys.sm$ts_used;

    2. SELECT SUM(a.log_space + b.data_space + c.tempspace) "Total_DB_Size (GB)"
       FROM (SELECT ROUND(SUM(bytes * members/1024/1024/1024), 2) log_space
               FROM v$log) a,
            (SELECT ROUND(SUM(bytes/1024/1024/1024), 2) data_space
               FROM dba_data_files) b,
            (SELECT NVL(ROUND(SUM(bytes/1024/1024/1024), 2), 0) tempspace
               FROM dba_temp_files) c;

    1. Are these queries correct?

    2. Should we also include the temp tablespace when calculating the size of the database, or should it be omitted?

    3. Is only the first query's result correct?

    4. If none of the above is correct, please suggest the appropriate query to get the size of the database.

    DB: 10.2.0.4
    OS: RHEL 4.7

    Please suggest

    Thank you

    Hello

    Well, keeping it simple (not taking redo logs, control files and archive logs into account):

    Database size (sum of data files and temp files):
    SELECT SUM(mb) TotalDB FROM (
      SELECT SUM(bytes)/1024/1024 mb FROM dba_data_files
      UNION ALL
      SELECT SUM(bytes)/1024/1024 mb FROM dba_temp_files
    );

    Used space:
    SELECT SUM(mb) UsedMB FROM (
      SELECT SUM(bytes)/1024/1024 mb FROM dba_data_files
      UNION ALL
      SELECT -1 * SUM(bytes)/1024/1024 mb FROM dba_free_space
    );

    Free space:
    SELECT SUM(bytes)/1024/1024 FreeMB FROM dba_free_space;

    Regards,
    Pinela.

  • What is the maximum database size TimesTen can support?

    Masters.


    What database size can TimesTen support these days? Or does it just vary with the amount of physical memory?

    I looked through the TimesTen online docs, but I found no useful pointer.

    Could you please guide me in the right direction?

    Thank you

    As Jeremy mentioned, there is no built-in limit on the overall database size. It really depends on the hardware, mainly the memory capacity and the disk capacity/performance. We have at least one data store > 1 TB deployed in production, but the most common sizes encountered are in the tens of GB range, with a few in the hundreds of GB range.

    The main challenge with very large in-memory databases is some of the management operations, above all startup/recovery, shutdown and, if you use replication, duplicate operations. These can take a long time to complete, even with good disks.

    For example, consider a 1 TB (1024 GB, 1,048,576 MB) data store running on a RAID-0 disk system that can sustain a 100 MB/s sequential transfer rate (a midrange disk subsystem). In this case a clean startup (without recovery) will take at least 1,048,576 / 100 seconds = 10,486 seconds = 175 minutes = 2 hours and 55 minutes; i.e. it will take nearly 3 hours just to bring up the data store. If recovery is needed (e.g. after a failure), it will take even longer. In a replicated environment the speed of a duplicate operation may well be limited by the network (depending on the type of LAN), so a duplicate of this data store could easily take longer than 3 hours in some circumstances (though using the --compress option and a high-bandwidth LAN will help compensate for that).

    Chris

  • Determine the size of a database

    Hi all

    I would be really grateful if someone could please help me determine the size of the database.

    Regards

    SELECT SUM(bytes) FROM dba_data_files;
    SELECT SUM(bytes) FROM dba_temp_files;
    SELECT SUM(bytes) FROM v$log;

  • BlackBerry smartphone: how to increase the database size - please help

    Hi all

    I'm new to the world of BlackBerry and am stuck with some problems. The first is that the database size on my BB 9000 is 812.3 KB, which I think is the default. It is now full because I get a lot of emails on my phone. Because of this my call logs are deleted automatically, and the only call log data I have left covers the last 24 hours.

    I need help understanding whether it is possible to change the database size, or whether my emails can be moved onto my 2 GB media card. If neither of these is possible, then please suggest a solution; I need to keep at least 1 week of call logs.

    Help, please

    Jousset

    jousset wrote:
    Sorry about that database thing... I was wrong... The call log stores only 20 entries. Any idea why that is happening?

    Look in your Messages folder > Menu > View Folder > Call Logs... do you now see more than the last 20 calls?

  • Size of database with ASM storage

    Hello

    We have Oracle RAC 11g Release 2 databases on Linux (kernel 2.6). We have four databases, but they all share the same ASM storage.

    My question is: how can I find the size of my database in ASM storage? What Linux command can I use to calculate the database size?

    Thank you in advance.

    1. Connect as the GI (Grid Infrastructure) user.

    2. Run asmcmd lsdg.
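
    If you prefer SQL to asmcmd, a rough equivalent against v$asm_diskgroup (run on the ASM or database instance) shows the same per-diskgroup totals. Note that, like asmcmd lsdg, this reports disk group usage, not per-database size.

    -- Per-diskgroup capacity and usage (values are in MB).
    SELECT name,
           total_mb,
           free_mb,
           total_mb - free_mb AS used_mb
    FROM   v$asm_diskgroup;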

  • size of database 11g

    Hi all
    First of all I would like to thank everyone here on this forum for their time and support.

    Oracle RAC 11gR2 with ASM on Linux

    Please be informed that I saw in the alert log file an error related to the tablespace size:
    cannot extend...

    So we plan to increase the storage by adding a datafile, but the vendor will do this task, not me,
    and I found the below:


    Database size        Used space           Free space
    -------------------- -------------------- --------------------
    127 GB               13 GB                114 GB


    The vendor will add more space, but it will not all go into one datafile.
    Will this hurt performance, given that Oracle will write data across the files in a round-robin fashion?
    Say we load 1 MB of data: it will be spread across the 4 datafiles (tablespace X contains 4 datafiles),
    so will a SELECT statement then take more time?
    Please give me a good link if you have one on this issue,
    and I will appreciate it.

    I don't think there will be any performance issues when you have several datafiles. In fact, one of the major benefits of having multiple datafiles comes when you are able to put each datafile on a separate drive array, which decreases contention between disks.
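
    For reference, a rough sketch of what the vendor's datafile addition might look like, assuming an ASM-managed tablespace; "x" and "+DATA" are placeholders for your tablespace and disk group names, and the sizes are arbitrary.

    -- Add another datafile to the full tablespace (placeholder names/sizes).
    ALTER TABLESPACE x
      ADD DATAFILE '+DATA' SIZE 10G
      AUTOEXTEND ON NEXT 1G MAXSIZE 32767M;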

  • vCenter database and log sizes (SQL 2005)

    I don't really have much experience with DB architecture, but I was looking at our vCenter DB and log files. Both are about 14 GB.

    (1) We have about 30 hosts and 500 or more VMs. How do these sizes compare to similar environments?

    (2) Do we need to limit or truncate any of our logs? Is there a best practice document for vCenter DB configuration and routine maintenance? It seems that all the configuration is done automatically at installation.

    I'm not a DB admin either, but I do know the database size is strongly influenced by which statistics level you use in vCenter.   You can see that by going to Administration > vCenter Server Settings > Statistics.   It will give you an estimated space requirement as you bump the level up.  You can also look at this document, http://www.vmware.com/pdf/vsp_4_vcdb_sql2008.pdf .  Even though it is for SQL 2008 you can get a few tips from it.
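
    To see how the 14 GB splits between the data and log files, something like this generic SQL Server query (not vCenter-specific; run it in the vCenter database) should work. The size column is in 8 KB pages.

    -- Data vs. log file sizes for the current database.
    SELECT name,
           type_desc,                -- ROWS (data) or LOG
           size * 8 / 1024 AS size_mb
    FROM   sys.database_files;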

    If you have found this or any other post useful, please consider using the Helpful/Correct buttons to award points.

    Twitter: http://twitter.com/mittim12

  • Extreme increase in database size - need to monitor table sizes

    Hi all

    I need help tracking the growth of tables. The database has grown enormously, from 192 GB to 200 GB over five days, which is roughly a 4% increase in database size. The general trend is that the DB grows about 20% annually, so this much growth in a single week will cause hardware problems down the road.

    Please advise on how to find the tables that are still growing.

    Thank you
    DBA Junr.

    Hi, I don't know if this will help you now, but I have been doing something like this on a daily basis for about 3 years. Literally, I record the size of every segment each day for that whole period and can go back and graph the growth history. If I had the problem you are asking about, I would be able to tell you exactly which segments grew. If you need a smaller time frame, you could change the value stored in the 'created_date' column to hourly, I guess.

    Anyway, here's the create statement.

    create table db_segment_history
    (
      db_segment_history_id     integer not null,
      owner                 varchar2(30),
      segment_name          varchar2(81),
      partition_name        varchar2(30),
      segment_type          varchar2(18),
      created_date          date default trunc(sysdate) not null,
      tablespace_name       varchar2(30),
      header_file           number,
      header_block          number,
      bytes                 number,
      blocks                number,
      block_size            number,
      extents               number,
      initial_extent        number,
      next_extent           number,
      min_extents           number,
      max_extents           number,
      pct_increase          number,
      freelists             number,
      freelist_groups       number,
      relative_fno          number,
      buffer_pool           varchar2(7)
    );
    
    create sequence db_segment_history_seq cache 1000;
    
    create unique index xpkdb_segment_history on db_segment_history( db_segment_history_id );
    
    alter table db_segment_history add ( constraint xpkdb_segment_history primary key ( db_segment_history_id )
    using index );
    
    create unique index ak1db_segment_history on db_segment_history(
         owner, segment_name, partition_name, segment_type, created_date );
    
    create trigger db_segment_history_bir
    before insert on db_segment_history
    for each row
    begin
         select     db_segment_history_seq.nextval
         into     :new.db_segment_history_id
         from     dual;
    end;
    /
    

    This is the insert / select that I use.

    insert into db_segment_history(
         owner, segment_name, created_date,
         partition_name, segment_type, tablespace_name,
         header_file, header_block, bytes,
         blocks, block_size, extents,
         initial_extent, next_extent, min_extents,
         max_extents, pct_increase, freelists,
         freelist_groups, relative_fno, buffer_pool )
    select  owner, segment_name, trunc( sysdate ),
         partition_name, segment_type, tablespace_name,
         header_file, header_block, bytes,
         blocks, bytes/blocks block_size, extents,
         initial_extent, next_extent, min_extents,
         max_extents, pct_increase, freelists,
         freelist_groups, relative_fno, buffer_pool
    from    dba_segments
    where   tablespace_name in(
              select tablespace_name from dba_tablespaces where contents = 'PERMANENT' );
    
    commit;
    

    I hope this helps.
    Michael Cunningham

    Published by: Michael C on February 21, 2012 14:19
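
    If it helps, a rough follow-up query using the db_segment_history table defined above (my own addition; it assumes the min/max difference per segment over the window is a good enough growth indicator):

    SELECT   owner,
             segment_name,
             MAX(bytes) - MIN(bytes) AS growth_bytes
    FROM     db_segment_history
    WHERE    created_date >= TRUNC(SYSDATE) - 7
    GROUP BY owner, segment_name
    HAVING   MAX(bytes) - MIN(bytes) > 0
    ORDER BY growth_bytes DESC;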

  • CPU and RAM of the database server

    How can we see how much CPU and RAM the database server has?
    select * from v$version
    BANNER                                                           
    ---------------------------------------------------------------- 
    Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi 
    PL/SQL Release 10.2.0.5.0 - Production                           
    CORE     10.2.0.5.0     Production                                         
    TNS for HPUX: Version 10.2.0.5.0 - Production                    
    NLSRTL Version 10.2.0.5.0 - Production

    To check the CPU count and physical RAM, you can look at the OS level:
    ioscan | grep -i processor
    mpstat
    dmesg

    And from the Oracle side:

    SQL> show parameter sga_max_size
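
    As another option from inside the database (Oracle 10g and later), v$osstat exposes host CPU and physical memory figures:

    SELECT stat_name, value
    FROM   v$osstat
    WHERE  stat_name IN ('NUM_CPUS', 'PHYSICAL_MEMORY_BYTES');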

  • Calculating the database size

    Hello

    I have been given a DB whose size I am not sure of, but it is more than 50 or 100 GB. How can I check the size of the entire database using a query?
    I tried these:
    SELECT SUM(bytes)/1024/1024 "MB" FROM dba_segments;
    which gives me 1 GB,

    and
    SELECT SUM(bytes/(1024*1024)) "DB size in MB" FROM dba_data_files; which gave me 1.6 GB, so something is wrong.

    Can anyone provide me with the right query to get the full size of a database?

    Thank you

    >
    Do I have to run all three queries to get the total size of the database?
    >

    Yes, you must include the temporary tablespace and the redo log files.

    And you also need to include the generated archive logs, which you can get from V$ARCHIVED_LOG.
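
    A rough query pulling those pieces together (datafiles, tempfiles, online redo logs, plus archived logs still on disk) might look like this; adjust it to what you actually want to count.

    SELECT ROUND((
             (SELECT NVL(SUM(bytes), 0) FROM dba_data_files)
           + (SELECT NVL(SUM(bytes), 0) FROM dba_temp_files)
           + (SELECT NVL(SUM(bytes * members), 0) FROM v$log)
           + (SELECT NVL(SUM(blocks * block_size), 0)
                FROM v$archived_log WHERE deleted = 'NO')
           ) / 1024 / 1024 / 1024, 2) AS total_size_gb
    FROM dual;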

  • Check the size of database objects

    Hello

    How can I see the size of the tables and other objects in the database on the disk?

    Thank you!

    You can check it using this query.

    SELECT segment_name, SUM(bytes)/1024/1024 FROM dba_segments GROUP BY segment_name;

    This gives the size in MB.
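
    A variant of the same idea, if you only want the biggest objects: the 20 largest segments in MB (the inline view + ROWNUM form keeps it compatible with older releases).

    SELECT *
    FROM  (SELECT owner,
                  segment_name,
                  segment_type,
                  ROUND(SUM(bytes)/1024/1024, 2) AS size_mb
           FROM   dba_segments
           GROUP BY owner, segment_name, segment_type
           ORDER BY size_mb DESC)
    WHERE ROWNUM <= 20;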

  • Berkeley DB database file size keeps growing

    We have three separate processes performing inserts, updates, and deletes against a single database file, and we see the file size grow continuously until we have exhausted all the space on our filesystem (10 GB).

    The BDB reference guide says:
    Space freed by deleting key/data pairs from a Btree or Hash database is never returned to the filesystem, although it is reused where possible. This means that Btree and Hash databases are grow-only. If enough keys are deleted from a database that shrinking the underlying file is desirable, you must create a new database and copy the records from the old one into it.

    My understanding of the statement above is that BDB should reuse the pages whose key/data pairs have been deleted. Any idea why that is not happening in our case? The delete process we have put in place runs in an infinite loop, and we have verified that the deletes are actually going through.

    Our keys are integers, generated in ascending consecutive order. We are using the C API.

    Thank you
    SB

    Published by: user1033737 on March 25, 2009 16:03

    Hi SB,

    There is no specific size the database must reach before page re-use begins. Also, you do not need to close and reopen the database handles to influence page re-use. BDB reuses pages when they become empty, and it does not do any kind of automatic key/page rebalancing (as that can lead to deadlocks). The factors you should look at are the insert, update and delete rates, whether transactions are long-lived, the deadlock detection policy, the page fill factor and the database page size.

    One example that might explain the behavior you are seeing is when the deleting process/thread gets to a page intending to delete the keys it found on that page. By the time it acquires the write lock on the page, the keys on it are no longer consecutive, and at best the deleting process/thread manages to remove only some of them (the other keys now have higher values, because the inserting and updating processes/threads have moved ahead), so the page is not emptied and cannot be placed on the free list for reuse. If the insert and update processes are very active, this can result in frequent page splits (i.e. new page allocations), with sets of keys moving onto the new pages. In addition, newly inserted keys can land on already populated pages, making things harder for the delete process. And if the new keys being inserted can, according to their values, be placed into already populated pages where there is room for them, then the pages on the free list, if any, will not be reused. Also note that we do not do key/page rebalancing.

    The result of such a scenario is a low page fill factor (few key/data pairs on many of the leaf pages). You can monitor the database statistics using 'db_stat -d' to get information on the number of pages, pages on the free list, page fill factor, etc.
    Try compacting the database and see how it goes. Also, if you have a simple unit test program that shows this behavior, I may be able to track down the culprit more easily.

    Best regards
    Andrei

  • Extract data from the vFoglight SQL database into an Excel pivot table/cube?

    Our CIO has requested hourly data so he can build a history of our virtual environment, using the data collected by vFoglight, to present to the board so they can see how we are utilizing the new environment they allowed us to buy. I've created a report for him, but he doesn't want to have to copy and paste data into an Excel file every hour, in short.

    It looks like this:

    dnsName          Name      Use (%)
    ESXi Server 1    Memory    42
    ESXi Server 2    Memory    37

    So is there a way to let Excel connect to SQL Server and pull this data so that he can organize it himself? Or can we write a report that displays the data hourly, like this?

    I.E.

    01:00 - 42%

    02:00 - 39%

    03:00 - 41%

    Hi Morgan,

    There are a few examples of extracting formatted metric data from Foglight using command-line scripts in the blog article "Foglight Reporting using Metric Queries or Groovy" at http://en.community.dell.com/techcenter/performance-monitoring/foglight-administrators/w/admins-wiki/5654.foglight-reporting-using-metric-queries-or-groovy

    I hope this will give you some ideas.

    Kind regards

    Brian Wheeldon
