Total latency on datastores

Hello

My host has some datastores. One of them has high latency. What should I do?

We know that total latency = device latency + kernel latency. If either the device latency or the kernel latency is high, how should we solve the problem?

Thank you

What I often find is that datastore latency alerts are related to large data moves - big chunks of data moving in response to a project. Are your datastores showing high latency, without an obvious cause, for long periods of time?

I have the internal vCenter team provide guidance on what they consider acceptable latency for reads and writes, and make these HARD KPI thresholds in vCOps. So when an alert occurs it is a matter of understanding planned vs. unplanned, and of drilling into the data with your experts.

In my humble OPINION - being an admin on vCOps does not mean that you know the environment better than your teams... It means that you have useful data to share, so you can develop the right thresholds together, plus an SOP for when latency occurs and the steps to follow. We are working through such things now... and testing the adapter for Symmetrix. It does not look at the HBA level, so we miss some key data points, but we are working on the adapter and other methods to get better data into vCOps.
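To ground those threshold discussions in data, here is a minimal PowerCLI sketch that pulls a week of per-datastore read/write latency for a host. The host name is hypothetical, it assumes an existing Connect-VIServer session, and it assumes the standard vSphere datastore latency counters are collected at your statistics level:

```powershell
# Hypothetical host name; requires an existing Connect-VIServer session.
# datastore.totalReadLatency.average / totalWriteLatency.average are in ms.
$stats = Get-VMHost "esx01.example.com" |
  Get-Stat -Stat "datastore.totalReadLatency.average",
                 "datastore.totalWriteLatency.average" `
           -Start (Get-Date).AddDays(-7) -IntervalMins 30

# Average per datastore instance - a starting point for KPI/threshold talks
$stats | Group-Object -Property Instance | ForEach-Object {
    [pscustomobject]@{
        Datastore    = $_.Name   # Instance identifies the datastore
        AvgLatencyMs = [math]::Round(
            ($_.Group | Measure-Object -Property Value -Average).Average, 2)
    }
}
```

Comparing that baseline against the numbers your storage team considers acceptable is one way to arrive at thresholds everyone agrees on.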

Tags: VMware

Similar Questions

  • Find the total size of VMs on specific datastores

    Hello

    PowerCLI guru, I'm not...

    I'm just using the following to get the total size of the virtual machines.

    Get-VMHost <hostname>,<hostname> | Get-VM | Select-Object Name, UsedSpaceGB

    The problem is that some of these VMs are on 15k drives and some are on 10k drives. Is there a way to add the datastore name each virtual machine is located on to the output? Seeing the datastore name, I can easily tell what type of disk it is on.

    Currently, the output is just:

    vmname          size
    -----------     ------
    ABC             123

    What I would like is:

    vmname          datastore       vmsize
    -----------     -------------   --------
    ABC             xyz-567         123

    Thank you

    The following PowerCLI script will show the name, datastores, and used space for all virtual machines:

    Get-VM | Select-Object -Property Name,
    @{Name="Datastores";Expression={
      [string]::Join(',',($_.DatastoreIDList |
        ForEach-Object { Get-View -id $_ |
        ForEach-Object {$_.Name}}) )
    }},
    UsedSpaceGB
    

    Best regards, Robert

  • PowerCLI script for DatastoreCluster, datastore and size info, DataCenter, Clusters

    Hello - I am looking to retrieve the DatastoreClusters and then list the datastores as well, with their size (total size, used space, free space, provisioned space, uncommitted space) and the total number of virtual machines on each datastore. I would also like to know which datacenter and cluster they are in. Is this possible? I might want to limit the display to datastores that have 13 percent free space or less.

    Thank you

    LORRI

    Sure, try it this way:

    Get-Datastore |
    Select-Object @{N='Datacenter';E={$_.Datacenter.Name}},
      @{N='DSC';E={Get-DatastoreCluster -Datastore $_ | Select-Object -ExpandProperty Name}},
      Name, CapacityGB,
      @{N='FreespaceGB';E={[math]::Round($_.FreespaceGB,2)}},
      @{N='ProvisionedSpaceGB';E={[math]::Round(($_.ExtensionData.Summary.Capacity - $_.ExtensionData.Summary.FreeSpace + $_.ExtensionData.Summary.Uncommitted)/1GB,2)}},
      @{N='UnCommittedGB';E={[math]::Round($_.ExtensionData.Summary.Uncommitted/1GB,2)}},
      @{N='VM';E={$_.ExtensionData.VM.Count}} |
    Export-Csv report.csv -NoTypeInformation -UseCulture
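    For the last requirement in the question (showing only datastores at 13 percent free space or less), one possible approach is to add a Where-Object filter before the Select. A sketch, using the standard FreeSpaceGB/CapacityGB properties:

    ```powershell
    # Keep only datastores with 13% free space or less
    Get-Datastore |
      Where-Object { $_.CapacityGB -gt 0 -and
                     ($_.FreeSpaceGB / $_.CapacityGB) -le 0.13 } |
      Select-Object Name, CapacityGB,
        @{N='FreePct';E={[math]::Round(100 * $_.FreeSpaceGB / $_.CapacityGB, 1)}}
    ```

    The same Where-Object clause can be inserted into the full report pipeline above the Select-Object step.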

  • How does View 5 distribute pool VM disks across datastores?

    Hello.  I am new to View.  I just started a new job where they have a View 5 VDI environment with more than 1,000 linked-clone VMs.

    There are 3 FC datastores which each have about 400 GB of free space showing in the View tab in vCenter Server administrator.  I have been asked to create a new pool with 15 virtual machines.  Each VM will require about 35 GB of total disk space (disposable, internal, swap and VM disks).  Therefore, approximately 525 GB of disk space for the virtual machines.

    I know that the amount of disk space required for the virtual machines is too big to fit on any one of the datastores.  Will View split the virtual machines across the three datastores?  If so, how does Composer decide which datastores to put each virtual machine's disks on?

    Thanks for your help!

    It should work that way, provided you selected all three datastores when you configured the pool.   If that isn't the case, you will need to go back and change the pool settings to include the datastores before starting provisioning.

  • Get-Stat -Disk for virtual machines on NFS datastores

    Hi all

    Does Get-Stat -Disk work for VMs on NFS datastores?

    $myVM | Get-Stat -Disk

    It doesn't seem to work for VMs on NFS datastores, but it does work for VMs on VMFS datastores.

    From a VMware presentation at http://webcache.googleusercontent.com/search?q=cache:h78Db7LqHcwJ:www.slideshare.net/vmwarecarter/powercli-workshop+%2Bget-stat+%2Bnfs

    «WARNING: NFS performance statistics are not available (coming in a future version of vSphere).»

    When will these statistics be available for NFS datastores?

    Kind regards

    marc0

    The answer is in the Instance property of the data that Get-Stat returns.

    (1) Get-Stat -Disk ==> Instance is the canonical name of the LUN on which the hard disk resides

    (2) Get-Stat with virtualDisk counters ==> Instance is the SCSI id of the virtual disk inside the VM

    (3) Get-Stat with datastore counters ==> Instance is the name of the datastore

    (1) gives you I/O statistics for the virtual machine as seen from the LUN. For a VM with several virtual disks on the same datastore, this will show the total I/O statistics. It will also include other I/O the VM generates on that LUN, such as swap and related files...

    (2) gives statistics for 1 specific virtual disk of your virtual machine

    (3) gives I/O statistics of your VM against a specific datastore. Interesting when you have a datastore with multiple extents (multiple LUNs)

    I hope that clarifies it a bit.
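    A short sketch of those three perspectives. The VM name is hypothetical, and since available counters vary by environment and stats level, it first uses Get-StatType to confirm which latency counters are actually collected:

    ```powershell
    $vm = Get-VM "MyVM"   # hypothetical VM name; assumes a Connect-VIServer session

    # Discover which latency counters are available for this VM
    Get-StatType -Entity $vm -Realtime | Where-Object { $_ -like "*atency*" }

    # LUN view: Instance is the canonical name (naa....) of the LUN
    $vm | Get-Stat -Disk -Realtime |
      Select-Object -First 3 MetricId, Instance, Value

    # Virtual disk view: Instance is the SCSI id (e.g. scsi0:0)
    $vm | Get-Stat -Stat "virtualDisk.totalReadLatency.average" -Realtime |
      Select-Object -First 3 MetricId, Instance, Value

    # Datastore view: Instance identifies the datastore
    $vm | Get-Stat -Stat "datastore.totalReadLatency.average" -Realtime |
      Select-Object -First 3 MetricId, Instance, Value
    ```

    Comparing the Instance values across the three calls makes the distinction above easy to see in your own environment.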

  • Host does not see some datastores

    I added a new host to an ESX 4 cluster.  The iSCSI storage adapter on all hosts (old and new) detects all available LUNs, 10 in total.  However, 2 of the LUNs do not appear in the list of datastores on the new host.  The existing hosts see all 10 datastores, and they are running live virtual machines that reside on the 2 "missing" datastores.  The new host sees only 8 of the 10 datastores.

    I see the text "detected to be a snapshot" and "Could not open device '4c19dd2e-f64dfcc4-f94f-a4badb4fcdb6' to probe: No such target on adapter" in /var/log/vmkernel.  What do these messages mean?

    4126) LVM: 7165: Device naa.60a98000572d4374433450444335486d:1 detected to be a snapshot:

    Jun 18 15:47:28 corpesx2 vmkernel: 0:00:03:39.125 cpu16:4126) LVM: 7172: queried disk ID:

    Jun 18 15:47:28 corpesx2 vmkernel: 0:00:03:39.226 cpu16:4126) Vol3: 1488: Could not open device '4c19dd2e-f64dfcc4-f94f-a4badb4fcdb6' to probe: No such target on adapter

    Jun 18 15:47:28 corpesx2 vmkernel: 0:00:03:39.226 cpu16:4126) Vol3: 608: Could not open device '4c19dd2e-f64dfcc4-f94f-a4badb4fcdb6' for volume open: No such target on adapter

    Jun 18 15:47:28 corpesx2 vmkernel: 0:00:03:39.226 cpu16:4126) FSS: 3702: No FS driver claimed device '4c19dd2e-f64dfcc4-f94f-a4badb4fcdb6': Not supported

    Jun 18 15:47:30 corpesx2 vmkernel: 0:00:03:40.594 cpu2:4098) NMP: nmp_CompleteCommandForPath: Command 0x12 (0x410007114b80) to NMP device 'mpx.vmhba1:C0:T0:L0' failed on physical path 'vmhba1:C0:T0:L0' H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

    Jun 18 15:47:30 corpesx2 vmkernel: 0:00:03:40.594 cpu2:4098) ScsiDeviceIO: 747: Command 0x12 to device 'mpx.vmhba1:C0:T0:L0' failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

    I see the text "detected to be a snapshot" and "Could not open device '4c19dd2e-f64dfcc4-f94f-a4badb4fcdb6' to probe: No such target on adapter" in /var/log/vmkernel.  What do these messages mean?

    It could be due to various reasons:

    1. Partition table lost on the host.  Check with fdisk -lu <devname>.

    2. LUNs detected as snapshots.  -> esxcfg-volumes -l should show the snapshot volumes.

    esxcfg-volumes -M <Datastore UUID>

    This will force-mount the volume.

    vmkfstools -V

    to refresh the storage.

    Then, if all goes well, you should be good to go.

    3. LUN corruption. -> Please contact VMware support or your SAN vendor's support to fix it.

    Please consider awarding points if this information is useful.

  • Can ESX hosts managed by separate VirtualCenter servers all see the same SAN LUNs / datastores?

    Scenario:

    I have two separate VirtualCenter deployments (in the same physical location), VCA and VCB, with each deployment managing 5 ESX hosts (10 in total).

    Are all 10 hosts allowed to look at the same SAN LUNs / datastores?

    Or do I have to isolate the five VCA hosts and the five VCB hosts to their own SAN LUNs / datastores?

    Box293 wrote:

    Basically, we have our live environment and we are on the beta program for VI4.

    I want to power off a virtual machine in the live environment and then remove it from the inventory.

    Then I want to go to the VI4 beta, browse the datastore and add this virtual machine to the inventory.

    These two environments can see the same LUNs / datastores.

    I did it and it works; however, I want to make sure that it is an approved practice.

    You can do that normally without any problems, even if the virtual machine is in a different datacenter inventory and is powered off there. You can map the LUN to the VI4 hosts and add the VM to the inventory of another host in VI4.

    Recently, I did this to migrate virtual machines from a host in a different subnet to a host that sees the same SAN LUNs in the other subnet.

    Best regards

    Hussain Al Sayed

  • How to check the total data sent and received via a Web Service

    Hi guys

    I'm developing an application that receives data from a server using a Web Service. Is there any fast way I can find the total data sent to the server, and the total data received from the server, by my application?

    Any help would be much appreciated.

    Pinsard

    Check this thread:

    http://supportforums.BlackBerry.com/T5/Java-development/calculate-data-usage-for-particular-applicat...

    It could be useful...

  • Oracle Business Intelligence Data Warehouse Administration Console 11g and Informatica PowerCenter and PowerConnect Adapters 9.6.1 Installation Guide for Linux x86 (64-bit)

    Hi all

    I'm looking for the full installation guide for Oracle Business Intelligence Data Warehouse Administration Console 11g and Informatica PowerCenter and PowerConnect adapters 9.6.1 for Linux x86 (64-bit). I just wonder whether there is any URL that you can recommend for the installation. Please advise.

    Looks like these are what you are asking for:

    http://docs.Oracle.com/CD/E25054_01/fusionapps.1111/e16814/postinstsetup.htm

    http://ashrutp.blogspot.com.by/2014/01/Informatica-PowerCenter-and.html

    Informatica PowerCenter 9 Installation and Configuration Guide complete | Informatica Training & Tutorials

  • Need ideas for comparing current Oracle EBS data against the data warehouse to establish that the data matches

    Hello, I am new to the Oracle forums. I'm a BI developer and I need to compare Oracle EBS data in my organization with the data in the data warehouse to make sure they match. I am using Informatica for this process, pulling from the two sources and comparing. Can someone give me a brief example of how to do this process, or similar methods with Informatica and its transformations, that would be useful? Thanks in advance. Let me know if you need more information about the process.

    Looks like you are trying to build a reconciliation process? That is, you may have implemented BIAPPS (or something custom) and now want to check your ETL? If so, that's good enough for a test case - we usually start at the top level (actual numbers for each company group, for example), then a subset of other queries, for example per level in the org hierarchy, by position, by dates, etc.

    and much more expensive than the implementation of OBIEE

    I don't think there are many things in the world that are more expensive than an OBIEE implementation!

  • Do we need a data warehouse if we only create dashboards and reports in OBIEE?

    Hello! I'm new to OBIEE.

    My organization has decided to build their reports and dashboards using OBIEE. I am involved in this mission, but I don't have in-depth knowledge of OBIEE.  My question is: do we need to have a data warehouse installed? Or do I just need to install OBIEE, create a repository, then create a data source in BI Publisher and then create dashboards or reports?

    I'm confused, so please help me in this regard. Please share any document or link where I can easily understand these things. Thank you

    Please share any document or link where I can easily understand these things. Thank you

    OBIEE is not software to run without a good understanding of its complex concepts. I would really recommend attending a training course, or at least reading a book (for example this or this). There are MANY general blog posts on OBIEE, many of which are of poor quality and are just step-by-step guides on how to do a particular task, without explaining the overall picture.

    If you want to use OBIEE and make it a success, you have to learn and understand the basics.

    To answer your question directly:

    -BI Publisher is not the same thing as OBIEE. It is a component of it (but is also available standalone). OBIEE makes data accessible through 'Dashboards', which are made up of 'Analyses' written in the Answers tool. Dashboards can also contain BI Publisher content if you want.

    -OBIEE can report against many different data sources, one or more data warehouses as well as transactional systems. Most OBIEE implementations that perform well are built against a dedicated DW, but that is not a mandatory condition.

    -Whether reporting against a real DW or not, when you build the OBIEE repository you build a "virtual" data warehouse; in other words, you dimensionally model all your business in one logical set of star schemas.

  • Creating data warehouse schemas for Reporting with Oracle XE

    Is it possible to run the data warehouse loader against an Oracle XE database?

    I get this error in the CIM log: ORA-00439: feature not enabled: Advanced replication.

    I saw that in this database we do not have the "Advanced replication" feature.

    SQL> select * from v$option where parameter = 'Advanced replication';
    
    
    PARAMETER
    -------------------------------------------------- --------------
    VALUE
    -------------------------------------------------- --------------
    advanced replication
    FALSE
    
    
    

    The CIM log:

    Mon Feb 23 14:16 BRT 2015 1424711760686 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Reporting data warehouse Datasource: DCS.DW, ARF.DW.base, ARF.DW.InternalUsers, Store.Storefront

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765012 atg.cim.database.dbsetup.CimDBJobManager 0 of 0 imports have not been executed.

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765192 atg.cim.database.dbsetup.CimDBJobManager Module list for Datasource Reporting loader: DafEar.Admin, DCS.DW, DCS.PublishingAgent, ARF.base, Store.EStore, Store.EStore.International

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765733 atg.cim.database.dbsetup.CimDBJobManager 1 of 1 imports have not been executed.

    Info Mon Feb 23 14:16:05 BRT 2015 1424711765953 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Publishing: DCS-UI.Versioned, BIZUI, PubPortlet, DafEar.admin, DCS-UI.SiteAdmin.Versioned, SiteAdmin.Versioned, DCS.Versioned, DCS-UI, Store.EStore.Versioned, Store.Storefront, DAF.Endeca.Index.Versioned, DCS.Endeca.Index.Versioned, ARF.base, DCS.Endeca.Index.SKUIndexing, Store.EStore.International.Versioned, Store.Mobile, Store.Mobile.Versioned, Store.Endeca.International, Store.KnowledgeBase.International, Portal.paf, Store.Storefront

    Info Mon Feb 23 14:16:11 BRT 2015 1424711771561 atg.cim.database.dbsetup.CimDBJobManager 65 of 65 imports have not been executed.

    Info Mon Feb 23 14:16:11 BRT 2015 1424711771722 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Production Core: Store.EStore.International, DafEar.Admin, DPS, DSS, DCS.PublishingAgent, DCS.AbandonedOrderServices, DAF.Endeca.Index, DCS.Endeca.Index, Store.Endeca.Index, DAF.Endeca.Assembler, ARF.base, PublishingAgent, DCS.Endeca.Index.SKUIndexing, Store.Storefront, Store.EStore.International, Store.Recommendations, Store.Mobile, Store.Endeca.International, Store.Fluoroscope, Store.KnowledgeBase.International, Store.Mobile.Recommendations, Store.Mobile.International, Store.EStore, Store.Recommendations.International

    Info Mon Feb 23 14:16:12 BRT 2015 1424711772473 atg.cim.database.dbsetup.CimDBJobManager 30 of 30 imports have not been executed.

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779573 atg.cim.database.dbsetup.CimDBJobManager Creating schema for the Reporting data warehouse data source

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779653 atg.cim.database.dbsetup.CimDBJobManager Top level module list for Datasource Reporting data warehouse: DCS.DW, ARF.DW.base, ARF.DW.InternalUsers, Store.Storefront

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module ARF.DW.base, sql/db_components/oracle/arf_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module ARF.DW.base, sql/db_components/oracle/arf_view_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module ARF.DW.base, sql/db_components/oracle/arf_init.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module DCS.DW, sql/db_components/oracle/arf_dcs_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module DCS.DW, sql/db_components/oracle/arf_dcs_view_ddl.sql

    Info Mon Feb 23 14:16:19 BRT 2015 1424711779993 atg.cim.database.dbsetup.CimDBJobManager Create DatabaseTask for Module DCS.DW, sql/db_components/oracle/arf_dcs_init.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager Found 2 of the 6 previously unrun tasks for Datasource Reporting data warehouse

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager 1 ARF.DW.base: sql/db_components/oracle/arf_view_ddl.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 atg.cim.database.dbsetup.CimDBJobManager 2 DCS.DW: sql/db_components/oracle/arf_dcs_view_ddl.sql

    Info Mon Feb 23 14:16:21 BRT 2015 1424711781085 /atg/dynamo/dbsetup/job/DatabaseJobManager Starting data setup job 1424711781085.

    Error Mon Feb 23 14:16:21 BRT 2015 1424711781516 /atg/dynamo/dbsetup/database/DatabaseOperationManager --- java.sql.SQLException: ORA-00439: feature not enabled: Advanced replication

    Is there a solution?

    Hello

    We have not tested or certified Oracle XE internally.

    You must use Oracle Enterprise Edition for Advanced Replication.

    I can't tell which Oracle edition you have installed from the log extract you've posted.

    ++++

    Thank you

    Gareth

    Please mark any update as "Correct Answer" or "Helpful Answer" if it helps and answers your question, so that others can identify the correct/helpful update among the many updates.

  • Integration Hub - is there a need for Eloqua-to-data-warehouse integration?

    We believe that at the heart of real enterprise Eloqua use, where it is often necessary to embrace existing platforms and architecture, there is a great need for marketing middleware sitting between IT and Marketing systems.

    Thus, we have developed the CleverTouch Marketing Integration Hub, allowing Eloqua users to dynamically integrate with their data warehouse and existing IT infrastructure.

    Is there a requirement for such a product in the enterprise space?

    To date, we have been delighted with the enthusiasm and support from Eloqua, a company of real thought leadership, but we would like to get comments from end users and community partners as well.

    http://bit.LY/gfbEiI

    YES!  Something is needed to fill this gap in enterprise-class reporting.  Please tell us more.

  • EPM 11.1.2 Essbase data warehouse infrastructure

    Hello

    We'll implement Hyperion Planning 11.1.2 and we intend to have the data warehouse push the budget data to Hyperion Planning, and to have Hyperion push to and retrieve data from Essbase.  Does it also make sense to push and pull data between Essbase and the data warehouse? To make it clearer: we take the budget data from the data warehouse and push it to Hyperion Planning.  The budget data will also be pushed from Essbase to the data warehouse.  Hyperion Planning will then do the what-if analysis and push back to Essbase, and from here Essbase will push the what-if scenarios back to the data warehouse.

    Please let me know if the scenario needs clarification.

    Thank you

    I did something similar in the past; the concept is perfectly feasible.

    Cheers

    John

    http://John-Goodwin.blogspot.com/

  • Removing SDRS datastores from a host

    Hello

    I have a scenario where I want to migrate a host and VMs between two clusters. Unfortunately I have a number of SDRS datastores attached to the host; no virtual machine is running from these datastores on this host... I'm trying to unmount the datastores but, as expected, I get the error indicating that the datastores are part of an SDRS cluster.

    Is it possible to remove the datastores from the host without disruption?

    Thank you

    Steve

    So, what you want is to remove some datastores from the SDRS cluster and move a host from one DRS cluster to another, is that it?

    Try the following steps:

    (1) To remove datastores from the SDRS cluster, just move the datastore outside the SDRS cluster;

    (2) Move the host to the target DRS cluster; if you have virtual machines running on it and cannot put the host in maintenance mode, you will need to disconnect the host, remove it from the inventory and then add the host again to the new cluster.

    (3) If you wish, add the datastores to the SDRS cluster of the new DRS cluster.
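    The steps above can also be sketched in PowerCLI. All object names here are hypothetical, it assumes an existing Connect-VIServer session, and you should test it in a lab first:

    ```powershell
    # (1) Move the datastore out of the SDRS cluster, back to the datacenter level
    $ds = Get-Datastore "DS01"                       # hypothetical datastore name
    Move-Datastore -Datastore $ds -Destination (Get-Datacenter "DC01")

    # (2) Move the host to the target DRS cluster (the host must be idle,
    #     or be disconnected, removed and re-added as described in step 2)
    Move-VMHost -VMHost (Get-VMHost "esx01.example.com") `
                -Destination (Get-Cluster "Cluster-B")

    # (3) Optionally add the datastore to the new cluster's SDRS datastore cluster
    Move-Datastore -Datastore $ds -Destination (Get-DatastoreCluster "SDRS-B")
    ```

    Moving a datastore into or out of a datastore cluster this way does not touch the VM files themselves, which is why it can be done without disruption.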
