Converter shows wrong size for datastores

The size of the datastores is not updated in Converter. I freed some space on the datastores, but Converter still displays the old size. All ESX hosts show the right size, so where does Converter look? How do you get it to update?

Never mind, I rescanned the hosts and now it shows the correct size.

Moderators or OP, you can mark this thread as answered because it appears that OP has solved the problem.

Tags: VMware

Similar Questions

  • This show is phishing for data.

    This show is phishing for data...

    Hi Charles,

    Phishing (pronounced "fishing") is a type of online identity theft. It uses e-mail and fraudulent Web sites designed to steal your personal data or information, such as credit card numbers, passwords, account data, or other information.

    I suggest you try the steps from the following link:

    E-mail and web scams: how to protect yourself (and what to do if you think you have been scammed)
    http://www.Microsoft.com/security/online-privacy/phishing-scams.aspx

    Come back and let us know the status of the issue; I'll be happy to help you further. We at Microsoft strive for excellence.

  • What are the best solutions for data warehouse configuration in 10gR2?

    I need help with the solutions to propose to my client for the upgrade of their data warehouse.

    Current configuration: Oracle 9.2.0.8 database. This database contains the data warehouse and one more LOB database on the same host. Their sizes are respectively 6 terabytes (3-year retention policy plus the current year) and 1 terabyte. The ETL tool and the BO reporting tools are also hosted on the same host. The current configuration performs really poorly.

    The customer cannot make major architectural or configuration changes to the existing environment right now due to certain constraints.

    However, they have agreed to move the databases onto hosts separate from the ETL tools and BO objects. We also plan to upgrade the database to 10gR2 to achieve stability and better performance and to overcome the current headaches.
    We cannot upgrade the database to 11g because BO is at version 6.5, which is not compatible with Oracle 11g. And the customer cannot afford to spend on anything other than the database.

    So my role is essential: provide a solid solution for better performance and carry out a successful migration of the Oracle database from one host to another (same platform and OS), in addition to the upgrade.

    I am now thinking of the following:

    Move the Oracle database and mart to a separate host.
    The host will be the same platform, i.e. HP Superdome with the 32-bit HP-UX operating system (we are unable to change to 64-bit because the ETL tool does not support it).
    Install the new Oracle 10g software on the new host and move the data to it.
    Explore the new 10gR2 features useful for the data warehouse, e.g. the new SQL clause types, parallel processing, partitioning, Data Pump, and SPA to study pre- and post-migration behavior.
    Also look at RAC to provide an even better solution, since our main motivation is to show an extraordinary performance improvement.
    I need your help to prepare a good roadmap for my assignment. Please advise.

    Thank you

    Tapan

    Two major changes from 9i to 10g that will impact performance are:
    a. changes in the default values used when executing GATHER_%_STATS
    b. changes in the behavior of the optimizer

    Oracle has published several white papers on them.
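
    A quick way to see what those 10g defaults actually are, before comparing them with what your 9i jobs used, is to query DBMS_STATS directly. A minimal sketch (GET_PARAM is the 10g call; the three parameters shown are simply the ones most often affected):

    -- Show the default values the GATHER_%_STATS procedures will use in 10g
    SELECT DBMS_STATS.GET_PARAM('METHOD_OPT')       AS method_opt,
           DBMS_STATS.GET_PARAM('ESTIMATE_PERCENT') AS estimate_percent,
           DBMS_STATS.GET_PARAM('GRANULARITY')      AS granularity
    FROM   dual;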

    Since 10g is no longer under Premier Support, you'll have some trouble logging SRs with Oracle Support - they will keep asking you to move to 11g.

    The host will be the same platform, i.e. HP Superdome

    Why do you need an export if you are not changing platforms? You could install 9.2.0.8 on the new server, clone the database, and then upgrade to 10gR2 / 11gR2.

    Hemant K Collette

  • Create schemas for data warehouse for Reporting with Oracle XE

    Is it possible to run the loader imports with an Oracle XE database for the DW?

    I get this error in the CIM log: ORA-00439: feature not enabled: Advanced replication.

    I saw that in this database we do not have the "Advanced replication" feature.

    SQL> select * from v$option where parameter = 'Advanced replication';
    
    PARAMETER
    ----------------------------------------------------------------
    VALUE
    ----------------------------------------------------------------
    Advanced replication
    FALSE

    CIM log:

    Info Mon Feb 23 14:16 BRT 2015 1424711760686 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Reporting data warehouse: DCS.DW, ARF.DW.base, ARF.DW.InternalUsers, Store.Storefront

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765012 atg.cim.database.dbsetup.CimDBJobManager 0 of 0 imports have not been run.

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765192 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Reporting loader: DafEar.Admin, DCS.DW, DCS.PublishingAgent, ARF.base, Store.EStore, Store.EStore.International

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765733 atg.cim.database.dbsetup.CimDBJobManager 1 of 1 imports have not been run.

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765953 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Publishing: DCS-UI.Versioned, BIZUI, PubPortlet, DafEar.Admin, DCS-UI.SiteAdmin.Versioned, SiteAdmin.Versioned, DCS.Versioned, DCS-UI, Store.EStore.Versioned, Store.Storefront, DAF.Endeca.Index.Versioned, DCS.Endeca.Index.Versioned, ARF.base, DCS.Endeca.Index.SKUIndexing, Store.EStore.International.Versioned, Store.Mobile, Store.Mobile.Versioned, Store.Endeca.International, Store.KnowledgeBase.International, Portal.paf, Store.Storefront

    Info Mon Feb 23 14:16:11 BRT 2015 1424711771561 atg.cim.database.dbsetup.CimDBJobManager 65 of 65 imports have not been run.

    Info Mon Feb 23 14:16:11 BRT 2015 1424711771722 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Production Core: Store.EStore.International, DafEar.Admin, DPS, DSS, DCS.PublishingAgent, DCS.AbandonedOrderServices, DAF.Endeca.Index, DCS.Endeca.Index, Store.Endeca.Index, DAF.Endeca.Assembler, ARF.base, PublishingAgent, DCS.Endeca.Index.SKUIndexing, Store.Storefront, Store.EStore.International, Store.Recommendations, Store.Mobile, Store.Endeca.International, Store.Fluoroscope, Store.KnowledgeBase.International, Store.Mobile.Recommendations, Store.Mobile.International, Store.EStore, Store.Recommendations.International

    Info Mon Feb 23 14:16:12 BRT 2015 1424711772473 atg.cim.database.dbsetup.CimDBJobManager 30 of 30 imports have not been run.

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779573 atg.cim.database.dbsetup.CimDBJobManager Creating schema for the Reporting data warehouse datasource

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779653 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Reporting data warehouse: DCS.DW, ARF.DW.base, ARF.DW.InternalUsers, Store.Storefront

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module ARF.DW.base, sql/db_components/oracle/arf_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module ARF.DW.base, sql/db_components/oracle/arf_view_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module ARF.DW.base, sql/db_components/oracle/arf_init.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module DCS.DW, sql/db_components/oracle/arf_dcs_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module DCS.DW, sql/db_components/oracle/arf_dcs_view_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module DCS.DW, sql/db_components/oracle/arf_dcs_init.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager Found 2 of 6 previously unrun tasks for Datasource Reporting data warehouse

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager 1 ARF.DW.base : sql/db_components/oracle/arf_view_ddl.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager 2 DCS.DW : sql/db_components/oracle/arf_dcs_view_ddl.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 /atg/dynamo/dbsetup/job/DatabaseJobManager Starting database setup job 1424711781085.

    Error Mon Feb 23 14:16:21 BRT 2015 1424711781516 /atg/dynamo/dbsetup/database/DatabaseOperationManager --- java.sql.SQLException: ORA-00439: feature not enabled: Advanced replication

    Is there a solution?

    Hello

    We have not tested or certified Oracle XE internally.

    You must use Oracle Enterprise Edition to get Advanced Replication.

    Which version of Oracle Commerce are you installing? I can't tell from the log extract you've posted.

    ++++

    Thank you

    Gareth

    Please mark any reply as "Correct Answer" or "Helpful Answer" if it helps and answers your question, so that others can identify the correct/helpful replies among the many posts.

  • Upgrade question for datastores - want a confirmation

    I have read the upgrade guide and think I have the answer, but I would like confirmation.

    This is my current scenario:

    Three ESX 3.5 Update 3 hosts share a single Fibre Channel LUN and two iSCSI LUNs. The LUNs are all VMFS3 format.

    vCenter 4.0. (It was just upgraded from 2.5 to 4.0 today)

    I would like to add a fourth server running vSphere ESX 4.0 to the data center. I would like this server to be able to share the same VMFS3 LUNs as the other ESX 3.5 hosts. Then I will move the VMs from the 3.5 servers to the 4.0 server one at a time, upgrading the VMware Tools on each virtual machine as I go. When I have moved enough VMs off one of the 3.5 servers, I can upgrade that host to 4.0, and then keep slowly moving VMs off the 3.5 hosts until they are all on 4.0 hosts and the 3.5 hosts have been upgraded to 4.0.

    What I want to confirm is that I can add the existing shared VMFS3 LUN to the new 4.0 host without the 4.0 host formatting or otherwise destroying the VMFS3 datastore and the 75 VMs that live on it.

    Looking at page 10 of the upgrade guide, it says "no VMFS upgrade is required if you are upgrading an ESX 3.x host with VMFS3 datastores", so I guess it's OK. What I didn't know is whether sharing an existing LUN with a new ESX 4 server changes anything about this scenario.

    Thanks for your help.

    What I want to confirm is that I can add the existing shared VMFS3 LUN to the new 4.0 host without the 4.0 host formatting it

    That's right - ESX 4.0 is non-destructive unless you explicitly tell it to erase/format partitions.

    If I share an existing LUN with a new ESX 4 server, does that change anything about this scenario?

    I don't remember exactly, but ESX 3.02 introduced VMFS 3.21 and 3.03 introduced VMFS 3.31. ESX 4.0 is at 3.41 (I THINK), but it doesn't matter - they are ALL compatible; these are minor revisions and there is no need to reformat.
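
    If you want to double-check which VMFS revision each datastore is actually at before mixing 3.5 and 4.0 hosts, a small PowerCLI sketch along these lines can help (assuming the PowerCLI snap-in is loaded and you are connected to vCenter; the Vmfs.Version property is only populated for VMFS datastores):

    # List each VMFS datastore with its on-disk VMFS revision
    Get-Datastore |
        Where-Object { $_.Type -eq 'VMFS' } |
        Select-Object Name, @{N='VmfsVersion';E={$_.ExtensionData.Info.Vmfs.Version}}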

  • My page layout suddenly went south, showing the wrong size for portrait and landscape

    The print layout in FF6 abruptly changed to different dimensions for landscape and portrait. This does not happen with any other application that prints, or with other browsers.

    I went through about:config and there is no print.print_printer line item at all; FF still prints, but it's all wrong.

    The usable portrait area is square, about 6 x 6", and landscape is the old portrait page. Printing in portrait uses only about half of the page before it starts another page.

    I went through the printer settings [Brother MFC 822].

    This does not affect any other program except Firefox.

    In about:config, did you try resetting all of your printer settings? When you use the filter line, less is more (that is, if you simply filter on print you will see more preferences to reset).

  • VSAN design for datastores

    Hi, I'm new to VSAN. I work for a small company and am considering implementing VSAN on a 3-node cluster.

    I want to configure VSAN so that I can store the database and production servers on faster disks (15K) and other data on slower disks (7.2K). Is there a way I can achieve this in VSAN? As I see it, I can only create one vsanDatastore (created automatically) and assign the disks to disk groups, but there is no option to store a VMDK on a selected disk group. Or maybe I could create two vsanDatastores (I don't know if that is possible or not) and configure the disk groups on one with the faster disks and on the other with the slower ones.

    Can someone please guide me on how we can achieve this? Any suggestion will be appreciated.

    Although it is possible to create one disk group with 1 x SSD and 3 x 600 GB 15K RPM disks and another with 1 x SSD and 2 x 4 TB 7200 RPM disks, I strongly advise against it.

    There is (currently) no way to force your VMs to use the "fast" or the "slow" disk groups.

    All disk groups are aggregated into one large datastore.

    Currently, the best practice and recommendation is to use disk groups of equal configuration.

  • Wrong size for UTL_SMTP mail

    Hello

    I use UTL_SMTP (demo_mail, actually) to send mail, but I'm having a little trouble with large emails. I use the PL/SQL block below to test. Beyond 32000 characters I can't send mail; it gives ORA-06502: PL/SQL: numeric or value error. I've also updated the mail procedure to use utl_smtp.write_data to write in smaller strings, but it still gives the same error. I also can't attach any file that is more than 32000 characters.

    My database version is 10.2.0.4 EE and the platform is Windows.

    DECLARE
      conn utl_smtp.connection;
      str  CLOB;
    BEGIN
      SELECT substr(fld1, 1, 32001) INTO str FROM test_new WHERE chp2 = 1;
      sys.demo_mail.mail(sender     => 'sender',
                         recipients => 'recipient',
                         subject    => 'Salim',
                         message    => str);
    END;

    Any help is appreciated...

    Tip:
    What is the maximum length for a varchar2 in pl/sql?

    -----------------
    Sybrand Bakker
    Senior Oracle DBA
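
    The limit the tip points at is 32,767 bytes for a PL/SQL VARCHAR2, so a large CLOB has to be streamed to the SMTP connection in smaller pieces rather than handed over as one string. Below is a minimal sketch of that idea using UTL_SMTP directly instead of demo_mail; the SMTP host, the addresses, and the 4000-character chunk size are placeholders/assumptions to adapt:

    DECLARE
      conn utl_smtp.connection;
      str  CLOB;
      pos  PLS_INTEGER := 1;
      amt  CONSTANT PLS_INTEGER := 4000;   -- chunk size, well under the 32767-byte VARCHAR2 limit
      len  PLS_INTEGER;
    BEGIN
      SELECT fld1 INTO str FROM test_new WHERE chp2 = 1;
      len := dbms_lob.getlength(str);

      conn := utl_smtp.open_connection('smtp.example.com', 25);   -- placeholder SMTP host
      utl_smtp.helo(conn, 'example.com');
      utl_smtp.mail(conn, 'sender@example.com');
      utl_smtp.rcpt(conn, 'recipient@example.com');
      utl_smtp.open_data(conn);
      utl_smtp.write_data(conn, 'Subject: Salim' || utl_tcp.crlf || utl_tcp.crlf);

      -- Stream the CLOB in sub-32K chunks instead of one oversized VARCHAR2
      WHILE pos <= len LOOP
        utl_smtp.write_data(conn, dbms_lob.substr(str, amt, pos));
        pos := pos + amt;
      END LOOP;

      utl_smtp.close_data(conn);
      utl_smtp.quit(conn);
    END;
    /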

  • Total latency on datastores

    Hello

    My host has several datastores; one of them has high latency. What should I do?

    We know that total latency = device latency + kernel latency. If the device or kernel latency is high, how should we solve the problem?

    Thank you

    What I often find is that latency alerts for datastores are related to large data moves - needing to move big chunks of data in response to a project. Are your datastores showing high latency without obvious cause, for long periods of time?

    I have the internal vCenter team provide guidance on what they consider acceptable read and write latency, and I make those hard KPI thresholds in vCOps. So when an alert occurs it is a matter of understanding planned vs. unplanned activity, and then of drilling into the data with your experts.

    In my humble OPINION, being the vCOps admin does not mean that you know the environment better than your teams... It means that you have useful data to share, and can then develop the right thresholds together, along with an SOP for what to do when latency occurs and the steps to follow. We are working through such things now... and testing the Symmetrix adapter. It does not look at the HBA level, so we miss some key data points, but we are working on the adapter, txt exports, and other methods to get better data into vCOps.
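
    To see where the latency is coming from (the device + kernel split mentioned in the question), the host disk counters can be pulled with Get-Stat. A rough sketch; the host name is a placeholder, and the counter names should be verified against what your ESX version actually exposes:

    # Compare device, kernel and total read latency for one host (realtime samples)
    $esx = Get-VMHost 'esx01.example.com'
    Get-Stat -Entity $esx -Realtime -MaxSamples 12 `
        -Stat 'disk.deviceReadLatency.average', 'disk.kernelReadLatency.average', 'disk.totalReadLatency.average' |
        Sort-Object Timestamp |
        Select-Object Timestamp, MetricId, Instance, Value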

  • OBIEE reverse-engineering to go from SQL Server to a data warehouse

    Hello
    I am new to data warehouse modeling. We currently have an OBIEE environment in place where the data source was the transactional tables in SQL Server. The SQL Server data is to be moved to a non-Oracle data warehouse, and I need to produce a logical data model for the warehouse people at my company. Unfortunately, the SQL Server data was never copied over, so I based the model on the logical and physical diagrams/relationships from OBIEE.
    My question concerns the validity of the following relationships for use in a data warehouse, based on what is currently in OBIEE. When I run this model through Erwin, I wonder whether I'm off base with the relationships (about the modeling, nothing personal):

    Dimension 1 has a 0:M relationship with Dimension 2
    Dimension 1 has a 0:M relationship with Dimension 3
    Dimension 2 has a 0:M relationship with Dimension 3
    Dimension 2 and Dimension 3 have a 0:M relationship with Fact 1

    Through the use of aliases and the like, this works in OBIEE. Will it work as a data model in a data warehouse environment?

    Thank you!

    I think you started off on the wrong foot. I suggest you Google "Kimball methodology" and read a few articles. Your DWH model should not be based on your transactional tables. You should ask your business users what "questions" they want to answer with the DWH, and then model your DWH based on that. You cannot model a DWH without knowing what questions you must answer. For example, if your users want to know sales per day and per branch, you will have a sales fact with a sales amount measure joined to two dimensions: branch and time. The number of facts will depend on the questions you need to answer, the type of data, and their granularity.
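
    To make that sales-per-day-and-per-branch example concrete, here is a small star-schema sketch; the table and column names are invented purely for illustration:

    -- Star schema sketch for the "sales per day and per branch" question
    CREATE TABLE dim_branch (
      branch_key   NUMBER        PRIMARY KEY,
      branch_name  VARCHAR2(100)
    );

    CREATE TABLE dim_date (
      date_key     NUMBER        PRIMARY KEY,   -- e.g. 20150223
      calendar_day DATE
    );

    CREATE TABLE fact_sales (
      branch_key   NUMBER REFERENCES dim_branch (branch_key),
      date_key     NUMBER REFERENCES dim_date (date_key),
      sales_amount NUMBER(12,2)                 -- the measure
    );

    -- The business question then becomes a simple aggregation:
    SELECT d.calendar_day, b.branch_name, SUM(f.sales_amount) AS total_sales
    FROM   fact_sales f
    JOIN   dim_date   d ON d.date_key   = f.date_key
    JOIN   dim_branch b ON b.branch_key = f.branch_key
    GROUP  BY d.calendar_day, b.branch_name;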

  • RMAN duplication and archivelog expiration guidelines for a data warehouse environment

    Operating system: Windows Server 2008
    DB: Oracle 11gR2

    Are there guidelines on how often one should do an RMAN duplication and what expiration to set on archivelogs for data warehouse environments?
    It appears to me that once a day would be enough in a data warehouse environment that is updated nightly. For expiration, I wouldn't expect less than 1 week.
    Cheers!

    I agree with damorgan

    See the links below for best practices.

    http://www.Oracle.com/technetwork/database/features/availability/311394-132335.PDF
    https://blogs.Oracle.com/Datawarehousing/entry/data_warehouse_in_archivelog_m

    Hope this helps,

    Regards
    http://www.oracleracepxert.com
    Understand the power of the Oracle RMAN
    http://www.oracleracexpert.com/2011/10/understand-power-of-Oracle-RMAN.html
    Duplication of data base RAC with RMAN
    http://www.oracleracexpert.com/2009/12/duplicate-RAC-database-using-RMAN.html
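
    On the archivelog-expiration side, the usual RMAN knobs are the retention policy and the archivelog deletion policy. A minimal sketch in line with the once-a-day / one-week idea from the question (the 7-day windows are assumptions to adjust to your own recovery requirements):

    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
    CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DEVICE TYPE DISK;
    # nightly run (assumed schedule): back up, then purge what is no longer needed
    BACKUP DATABASE PLUS ARCHIVELOG;
    DELETE NOPROMPT OBSOLETE;
    DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE - 7';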

  • PowerCLI script for DatastoreCluster, datastores with size info, DataCenter, Clusters

    Hello - I am looking to pull the DatastoreClusters and then list the datastores with their size info (total size, used space, free space, provisioned space, uncommitted space) and the total number of virtual machines on each datastore. I would also like to know which datacenter and cluster they are on. Is this possible? I might want to limit the display to datastores that have 13 percent free space or less.

    Thank you

    LORRI

    Sure, try it this way:

    Get-Datastore |
        Select-Object @{N='Datacenter';E={$_.Datacenter.Name}},
            @{N='DSC';E={Get-DatastoreCluster -Datastore $_ | Select-Object -ExpandProperty Name}},
            Name, CapacityGB,
            @{N='FreespaceGB';E={[math]::Round($_.FreespaceGB,2)}},
            @{N='ProvisionedSpaceGB';E={[math]::Round(($_.ExtensionData.Summary.Capacity - $_.ExtensionData.Summary.FreeSpace + $_.ExtensionData.Summary.Uncommitted)/1GB,2)}},
            @{N='UnCommittedGB';E={[math]::Round($_.ExtensionData.Summary.Uncommitted/1GB,2)}},
            @{N='VM';E={$_.ExtensionData.VM.Count}} |
        Export-Csv report.csv -NoTypeInformation -UseCulture
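
    To limit the output to datastores with 13 percent free space or less, as asked above, a Where-Object filter can be slipped into the same pipeline before the Select. A minimal sketch (the 0.13 threshold is simply the value from the question):

    # Keep only datastores whose free space is 13 percent of capacity or less
    Get-Datastore |
        Where-Object { $_.CapacityGB -gt 0 -and ($_.FreeSpaceGB / $_.CapacityGB) -le 0.13 } |
        Select-Object Name, CapacityGB,
            @{N='FreePercent';E={[math]::Round(100 * $_.FreeSpaceGB / $_.CapacityGB, 1)}}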

  • Find the total size of VMs on specific datastores

    Hello

    PowerCLI guru, I'm not...

    I'm just using the following to get the total size of the virtual machines.

    Get-VMHost <hostname>, <hostname> | Get-VM | Select-Object Name, UsedSpaceGB

    The problem is that some of these VMs are on 15K drives and some are on 10K drives. Is there a way to add the datastore name each virtual machine is located on to the output? Seeing the datastore name means I can easily know what type of disk it is.

    Currently, the output is just:

    vmname size

    -----------        ------

    ABC 123

    Here is what I would like:

    vmname datastore vmsize

    -----------     -------------   --------

    ABC             xyz-567         123

    Thank you

    The following PowerCLI script will show you the name, datastores, and used space for all virtual machines:

    Get-VM | Select-Object -Property Name,
    @{Name="Datastores";Expression={
      [string]::Join(',',($_.DatastoreIDList |
        ForEach-Object { Get-View -id $_ |
        ForEach-Object {$_.Name}}) )
    }},
    UsedSpaceGB
    

    Best regards, Robert

  • Storage vMotion between datastores when the block size is different

    Hi all

    I would like to know whether it is possible to do a Storage vMotion between two datastores when they have different block sizes. The case is an old VMFS3 datastore that was upgraded in place to VMFS5; since it was upgraded, it keeps its original block size, while the newly created VMFS5 datastore has a block size of 1 MB.

    So, will Storage vMotion work in this case? Will it fail, or will it work with degraded performance?

    Finally, is Storage vMotion possible even if you do not have a Storage DRS cluster?

    Thank you

    Yes you can!

    Check some info on the effects of block size: http://www.yellow-bricks.com/2011/02/18/blocksize-impact/

  • Best practices for creating datastores

    I have 10 iSCSI LUNs (all on the same device), each 1.8 TB in size, which I want to present to ESXi to create datastores.  Are there any recommendations for how I should divide these LUNs up, or should I just make one giant datastore?  Maybe there are performance factors to consider here?

    If I make ten 1.8 TB datastores, I can see a problem down the road where I need to expand a VMDK but cannot because there is not enough free space on the datastore; that would be less of a problem if I had one giant datastore from the start.

    Thank you.

    First of all, it's one of those "how long is a piece of string" type questions.

    It depends, of course, on the number of VMDKs you're going to be running, the available storage, the type of storage, the IO, the type of virtual machines, etc. etc. etc.

    Things to consider are, for example, whether you have storage that deduplicates and whether storage cost is a major factor (and so on).
    Of course... almost always, a cost reduction is equivalent to a drop in performance.

    In any case, a very loose rule I follow (in most cases) is to size LUNs somewhere between 400 and 750 GB and rarely (if ever) have more than 30 VMDKs per LUN.

    I almost always redirect this question to the following resources:

    First of all, the configuration maximums:
    http://www.VMware.com/PDF/vSphere4/R40/vsp_40_config_max.PDF

    http://www.gabesvirtualworld.com/?p=68
    http://SearchVMware.TechTarget.com/Tip/0,289483,sid179_gci1350469,00.html
    http://communities.VMware.com/thread/104211
    http://communities.VMware.com/thread/238199
    http://www.yellow-bricks.com/2009/06/23/vmfslun-size/

    (although Andre's post above covers most of them)
