Schema-level migration with GG?

Our schema has thousands of tables, indexes, etc.

Can I use GG to migrate at the schema level?

Yes, you can.

Please take a look here for some ideas

http://gavinsoorma.com/2010/02/Oracle-GoldenGate-Tutorial-4-performing-initial-data-load/
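For schema-level replication you do not need to list thousands of tables; the Extract and Replicat parameter files accept wildcards. A minimal sketch (the process, trail, credential and schema names here are placeholder assumptions, not from the tutorial):

-- Extract parameter file: capture every table in the SCOTT schema
EXTRACT ext1
USERID ggadmin, PASSWORD ggadmin
EXTTRAIL ./dirdat/lt
TABLE SCOTT.*;

-- Replicat parameter file: apply to the same schema on the target
REPLICAT rep1
USERID ggadmin, PASSWORD ggadmin
ASSUMETARGETDEFS
MAP SCOTT.*, TARGET SCOTT.*;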

Be sure to mark your questions as answered once you get your answer, in order to clean up the forum.

Greetings,
NACEUR

Tags: Business Intelligence

Similar Questions

  • Member-level security migration

    I used the file system option to migrate the Planning application with Life Cycle Management.

    However, the security at the member level did not migrate.

    I am able to export all the objects other than the security of dimension members.

    Did I miss something...

    Please suggest...

    Hello

    Have you tried exporting only the users on their own, without anything else? You should get a file named users.xml produced.
    Have you tried it on another application? Have you checked to see if there's anything in SharedServices_LCM.log?

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • Schema-level auditing in 11g R2

    People,

    Please, consider the scenario.


    We have a user named TEST. We granted SELECT privileges to the TEST user on SCOTT.DEPT. I can query the SCOTT.DEPT table without any problem by connecting as the TEST user.

    Then, I enable auditing as follows:

    SQL> AUDIT ALL BY test BY ACCESS;
    SQL> AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, ALTER TABLE, DELETE TABLE BY test BY ACCESS;
    SQL> AUDIT EXECUTE PROCEDURE BY test BY ACCESS;

    Auditing is now working for the TEST user and gives me the correct results whenever the TEST user queries any object within its own schema.


    The problem is that the audit trail does not capture cross-schema access. For example, if I run the query:

    SELECT * FROM dept;

    the statement is audited, but when running the same query against the SCOTT schema:

    SELECT * FROM scott.dept;

    no audit record is saved, and we want to capture it.


    Need your suggestions here.


    Kind regards

    Ali

    It probably does:

    SQL> drop user audtest cascade;

    User dropped.

    SQL> create user audtest identified by audtest;

    User created.

    SQL> grant connect to audtest;

    Grant succeeded.

    SQL> grant select on tt.dept to audtest;

    Grant succeeded.

    SQL> audit select table by audtest;

    Audit succeeded.

    SQL> conn audtest/audtest@...

    Connected.

    SQL> select * from tt.dept;

    ID
    ----------
    10

    SQL> select username, timestamp, owner, obj_name, action from dba_audit_trail where username = 'AUDTEST' and owner = 'TT';

    USERNAME                       TIMESTAMP OWNER                          OBJ_NAME             ACTION
    ------------------------------ --------- ------------------------------ -------------------- ----------
    AUDTEST                        03-FEB-16 TT                             DEPT                        103

    1 row selected.
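    If the goal is to capture cross-schema access to one particular object no matter who queries it, object-level auditing is an alternative worth trying. A minimal sketch (assumes AUDIT_TRAIL is already set to DB, as in the demonstration above):

    -- Audit the object itself, regardless of which user selects from it
    AUDIT SELECT ON scott.dept BY ACCESS;

    -- Then check for the captured records
    SELECT username, timestamp, owner, obj_name, action_name
      FROM dba_audit_trail
     WHERE owner = 'SCOTT' AND obj_name = 'DEPT';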

  • "Control terminals on component connector not on the level superior. block diagram" to comment on the report of the ADC

    Hi all

    Could someone please enlighten me as to what this VI Analyzer comment means:

    "Control terminals on connector pane not on top-level block diagram."

    Does this mean that some terminals are hidden within case structures and do not show on the diagram without going into those case structures? Or, by "top-level block diagram", does it mean
    main.vi, and that main.vi controls must also be connected to the connector pane?

    Thank you

    K.Waris

    For one thing, it means that you are running VI Analyzer on your VIs, since that is verbatim the warning you receive.  It simply means that a terminal which is connected to the connector pane is not on the top-level diagram, i.e. it is inside a case structure.

    As to why it is often not a good idea to do that, read this classic thread:

    http://forums.NI.com/T5/LabVIEW/case-structure-parameter-efficiency/m-p/382516#M191622

  • Syncing the Apps tier with a 2-node RAC after DB-tier migration

    Dear all,

    I migrated my database tier, 11.2.0.3, from a single node (file system) to a 2-node RAC, 11.2.0.3 (ASM disk group). Now I want to configure my APPS (R12.1.3) tier against the 2-node RAC, but I wonder how to do it.
    I also want to add one more application-tier node, in order to set up PCP and DNS load balancing.

    I already applied the pre-required patches on the application tier before migrating to the 2-node RAC database tier:
    1. Oracle E-Business Suite Release 12.1.1 Maintenance Pack, Patch 7303030
    2. For AutoConfig, apply R12.TXK.B.Delta.3, Patch 8919489
    3. Patch 9926448 to fix the known problem with FND_FS/SM alias generation when SCAN is enabled
    4. For SCAN listener AutoConfig support, apply either R12.TXK.B.Delta.3 [Patch 8919489] and R12.ATG_PF.B.Delta.3 [Patch 8919491], or 12.1.3 [Patch 9239090]

    I did the DB migration following standard DBA practice. My DB tier is now on a 2-node RAC, and the application tier (single node) is still on the same old server... it needs to be synced with the 2-node RAC, and an additional application-tier node added, so that PCP and DNS load balancing can be configured.

    Please suggest.

    Kind regards
    Aleem

    Aleem,

    Do you see errors in the database log file after the database is started and before you access the application?

    Can you drop the temp tablespace and create a new one, and see if you get the same error? (A sketch follows below.)

    Please see the following documents.

    Bug 9414040 - In RAC, a temp file can be used before its creation is finished (ORA-1157) [ID 9414040.8]

    Getting ORA-01157: CANNOT IDENTIFY/LOCK DATA FILE 37 when the datafile is in ASM [ID 978335.1]
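    If you try that, a minimal sketch (the temp tablespace name, disk group and sizes are assumptions; adjust to your environment):

    -- Create a fresh temp tablespace in ASM, switch to it, drop the old one
    CREATE TEMPORARY TABLESPACE temp2
      TEMPFILE '+DATA' SIZE 2G AUTOEXTEND ON NEXT 512M MAXSIZE 8G;
    ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
    DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;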

    Thank you

    Hussein

  • Wired terminal in subdiagram - VI Analyzer

    When I run VI Analyzer I get the following result:

    Wired Terminal in Subdiagram

    The control terminal 'log file path' does not reside on the top-level diagram.   To avoid unnecessary memory copies, control and indicator terminals that are wired to the connector pane should be placed on the top-level diagram.

    This brings me to the question -

    Should the control terminals be located outside the error case structure, or between the error case and the state machine case?

    VI Analyzer is only happy when the terminals are outside the error case.

    Jim

    The 'proper' method is to do it the way the VI Analyzer suggests.

    "Wrap your head in duct tape", then read this thread.

    I would HOPE the reviewers would know, but I can't speak for the 'right answer'.

    Ben

  • Migrating Oracle EBS R12.0.6 from 32-bit to 64-bit

    Hello

    We run Oracle E-Business Suite R12.0.6 on Windows 2003 Server 32-bit and its database on Windows 2003 Server 64-bit.
    We plan to migrate the apps tier to 64-bit Windows.

    Please guide me with a good and easy procedure.
    Thank you

    George wrote:
    Hello

    We run Oracle E-Business Suite R12.0.6 on Windows 2003 Server 32-bit and its database on Windows 2003 Server 64-bit.
    We plan to migrate the apps tier to 64-bit Windows.

    Please guide me with a good and easy procedure.
    Thank you

    Please refer to (Migrating Oracle E-Business Suite R12 to Microsoft Windows Server 2008 R2 [ID 1188535.1]). Please make sure that you are running a supported 64-bit Windows OS version (as stated in the doc).

    Thank you
    Hussein

  • Migrating from Essbase 7.1.5 to Essbase System 9 - help urgently needed please!

    Hello Experts,

    Need your help in answering the following questions

    1. Can we migrate the outlines running on 7.1.5 to System 9?
    2. What are the consequences attached to it?
    3. What does one need to migrate?
    4. How do we do it; what would be the next steps?
    5. Can Administration Services and Essbase both run on Windows 2003 SP1 (Server Enterprise Edition) 64-bit, Itanium 2 processor, on the same machine?
    6. Could you please give the main differences between 7.1.5 and 9.3.1?

    Environment:

    Operating system: Sun Solaris SPARC (32 bit)

    Platform Version: 8

    Thank you

    Sonu

    Edited by: 637223 on January 22, 2009 01:35

    1. Can we migrate the outlines running on 7.1.5 to System 9?
    Yes

    2. What are the consequences attached to it?
    Nothing

    3. What does one need to migrate?
    .otl, .csc, .rep, data and other Essbase objects

    4. How do we do it; what would be the next steps?
    You can migrate 1) level-0 data, or all data.

    5. Can Administration Services and Essbase both run on Windows 2003 SP1 (Server Enterprise Edition) 64-bit, Itanium 2 processor, on the same machine?
    Yes... but it's always good to keep Essbase separate from the others.

    6. Could you please give the main differences between 7.1.5 and 9.3.1?
    The important one will be security: it can be Shared Services or native. You will also find some new functions...

    I hope this helps.
    www.dornakal.blogspot.com
    [email protected]

    Edited by: Dornakal on January 27, 2009 10:17

  • EBS 11i migration from NT to Linux

    Hi friends,

    We want to migrate our EBS 11i from Windows NT to IBM RS/6000 AIX 5.2.
    I found these Metalink notes related to this activity (but for Linux):

    Doc ID: Note: 238276.1 (Apps-tier migration)
    Doc ID: Note: 230627.1 (DB-tier migration)

    Can I use this documentation, since they are two Unix flavors? Or is there anything specific to AIX?
    Are these notes up to date, and am I on the right track in following them?
    Are there other (additional) procedures I need to check?


    Thank you very much

    Thank you very much my hero :)

    You are welcome.

    How can I give you points for an excellent answer?

    If the above is useful, you can mark the replies as helpful/correct.

  • IMP a user into a different database schema

    Scenario
    =====
    Oracle 9i Enterprise Edition (9.2.0.8)
    Windows Server 2003 (32-bit)
    --- to ---
    Oracle 11g Enterprise Edition (11.2.0.3)
    Windows Server 2008 R2 Standard (64-bit)


    Hi all

    I'm doing an upgrade from 9i to 11g and I'm using native exp/imp to migrate the data... I exported the 9i schemas into a dmp file and I'm going to import into 11g. The 11g database is a brand new database without any of the existing schemas...

    My newbie question is: should I manually create these exact schemas in 11g, and their tables, before importing the data? If so, there are many schemas and objects to create... one by one... This is the tedious problem that I face...
    Is there a workaround for that? Thank you.

    Hello
    Old exp/imp creates no users unless you do a 'full' export (10g expdp/impdp will do it for schema-level exports). You need to first create all the users if you exported from a list of schemas. Is it possible to re-export as 'full'?

    Kind regards
    Rich
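    If a full re-export is not possible, note that you only need to pre-create the users themselves, not their tables; imp creates the objects. A minimal sketch with hypothetical schema and tablespace names:

    -- Repeat per schema contained in the dump (names are examples only)
    CREATE USER app_owner IDENTIFIED BY "change_me"
      DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp
      QUOTA UNLIMITED ON users;
    GRANT CONNECT, RESOURCE TO app_owner;
    -- Then import that schema from the 9i dump, e.g.:
    -- imp system/... FILE=exp.dmp FROMUSER=app_owner TOUSER=app_owner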

  • Existing network analysis

    Hi Experts,

    It's a pretty open-ended question. I need to analyse the existing infrastructure from a network perspective, and I was wondering what would be the best approach to collecting this information. It's a rather massive, convoluted network with very little or dated documentation. How would you approach it if you were to walk into such a situation?

    Ta

    Hello Adeel,

    Assuming that access is available to all the network components of the infra, and that none are configured for SNMP querying or already part of a network map/inventory application, I would generally do as below:

    1) Build a base-level network diagram and build outward [using 'sh ip int bri', 'sh cdp nei', 'sh ip arp', 'sh mac addr' etc.], and continue to do so until the diagram has only edge devices/ISP-facing interfaces. (See the command sketch after this list.)

    2) Add values [interface details, speeds, average bandwidth etc.] to the links on the said diagram.

    - If the infrastructure is too big, you can do it in segments [Campus1, Campus2, branch, etc.]
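    As a sketch of step 1, the abbreviated discovery commands expand to the following (Cisco IOS; the parenthetical notes are mine, not command output):

    show ip interface brief      (interface and IP inventory on this device)
    show cdp neighbors detail    (directly connected devices to visit next)
    show ip arp                  (IP-to-MAC mappings on local segments)
    show mac address-table       (switchports where those MACs are learned)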

    I hope this helps.

    Kind regards

  • ESX cluster load - host-wise...

    Hi all

    I have 2 clusters with ESX hosts.

    Now I need to get a report of each cluster's configuration and of the configuration of the hosts in each cluster... I want the report to cover both the cluster resources and the cluster's hosts, in the same layout

    as below

    ClusterName | Total Space (MB) | Available CPU (MHz) | Total CPU (MHz) | Total Physical Memory (MB) | Available Memory (MB)

    ____________________________________________________________________________________________________________


    Thanks to LucD for providing the report below. I received this report from the VMware blogs, but it reports only for CLUSTERS.

    I hope you understand my problem.

    appreciate your help

    Thank you

    ALDOVMWARE

    # Virtual Center Server
    $VCServerName = "my servername"
    $creds = Get-Credential
    # Some variables
    $portvc = "443"
    $VC = Connect-VIServer -Server $VCServerName -Credential $creds -ErrorAction Stop -Port $portvc
    $report = @()
    $clusterName = "MonitoringTestCluster"
    $report = foreach($cluster in Get-Cluster -Name $clusterName){
        $esx = $cluster | Get-VMHost
        $ds = Get-Datastore -VMHost $esx | where {$_.Type -eq "VMFS" -and $_.Extensiondata.Summary.MultipleHostAccess}
        New-Object PSObject -Property @{
            VCname = $cluster.Uid.Split(':@')[1]
            DCname = (Get-Datacenter -Cluster $cluster).Name
            Clustername = $cluster.Name
            "Number of hosts" = $esx.Count
            "Total Processors" = ($esx | measure -InputObject {$_.Extensiondata.Summary.Hardware.NumCpuPkgs} -Sum).Sum
            "Total Cores" = ($esx | measure -InputObject {$_.Extensiondata.Summary.Hardware.NumCpuCores} -Sum).Sum
            "Current CPU Failover Capacity" = $cluster.Extensiondata.Summary.AdmissionControlInfo.CurrentCpuFailoverResourcesPercent
            "Current Memory Failover Capacity" = $cluster.Extensiondata.Summary.AdmissionControlInfo.CurrentMemoryFailoverResourcesPercent
            "Configured Failover Capacity" = $cluster.Extensiondata.ConfigurationEx.DasConfig.FailoverLevel
            "Migration Automation Level" = $cluster.Extensiondata.ConfigurationEx.DrsConfig.DefaultVmBehavior
            "DRS Recommendations" = &{$result = $cluster.Extensiondata.Recommendation | %{$_.Reason};if($result){[string]::Join(',',$result)}}
            "DRS Faults" = &{$result = $cluster.Extensiondata.drsFault | %{$_.Reason};if($result){[string]::Join(',',$result)}}
            "Migration Threshold" = $cluster.Extensiondata.ConfigurationEx.DrsConfig.VmotionRate
            "target hosts load standard deviation" = "NA"
            "Current host load standard deviation" = "NA"
            "Total Physical Memory (MB)" = ($esx | Measure-Object -Property MemoryTotalMB -Sum).Sum
            "Configured Memory MB" = ($esx | Measure-Object -Property MemoryUsageMB -Sum).Sum
            "Available Memory (MB)" = ($esx | Measure-Object -InputObject {$_.MemoryTotalMB - $_.MemoryUsageMB} -Sum).Sum
            "Total CPU (Mhz)" = ($esx | Measure-Object -Property CpuTotalMhz -Sum).Sum
            "Configured CPU (Mhz)" = ($esx | Measure-Object -Property CpuUsageMhz -Sum).Sum
            "Available CPU (Mhz)" = ($esx | Measure-Object -InputObject {$_.CpuTotalMhz - $_.CpuUsageMhz} -Sum).Sum
            "Total Disk Space (MB)" = ($ds | where {$_.Type -eq "VMFS"} | Measure-Object -Property CapacityMB -Sum).Sum
            "Configured Disk Space (MB)" = ($ds | Measure-Object -InputObject {$_.CapacityMB - $_.FreeSpaceMB} -Sum).Sum
            "Available Disk Space (MB)" = ($ds | Measure-Object -Property FreeSpaceMB -Sum).Sum
        }
    }
    $report | Export-Csv "Q:\Cluster-Report.csv" -NoTypeInformation -UseCulture

    Try this, it's the closest you can get.

    You cannot combine different types of objects (rows with different properties) in one CSV file.

    # Virtual Center Server
    $VCServerName = "my servername"
    $creds = Get-Credential
    # Some variables
    $portvc = "443"
    $VC = Connect-VIServer -Server $VCServerName -Credential $creds -ErrorAction Stop -Port $portvc
    $report = @()
    $clusterName = "MonitoringTestCluster"
    foreach($cluster in Get-Cluster -Name $clusterName){
      foreach($esx in (Get-VMHost -Location $cluster)){
        # Shared VMFS datastores for this host (needed by the disk-space lines below)
        $ds = Get-Datastore -VMHost $esx | where {$_.Type -eq "VMFS" -and $_.Extensiondata.Summary.MultipleHostAccess}
        $report += New-Object PSObject -Property @{
            VCname = $cluster.Uid.Split(':@')[1]
            DCname = (Get-Datacenter -Cluster $cluster).Name
            Clustername = $cluster.Name
            VMHost = $esx.Name
            "Number of hosts" = $esx.Count
            "Total Processors" = ($esx | measure -InputObject {$_.Extensiondata.Summary.Hardware.NumCpuPkgs} -Sum).Sum
            "Total Cores" = ($esx | measure -InputObject {$_.Extensiondata.Summary.Hardware.NumCpuCores} -Sum).Sum
            "Current CPU Failover Capacity" = $cluster.Extensiondata.Summary.AdmissionControlInfo.CurrentCpuFailoverResourcesPercent
            "Current Memory Failover Capacity" = $cluster.Extensiondata.Summary.AdmissionControlInfo.CurrentMemoryFailoverResourcesPercent
            "Configured Failover Capacity" = $cluster.Extensiondata.ConfigurationEx.DasConfig.FailoverLevel
            "Migration Automation Level" = $cluster.Extensiondata.ConfigurationEx.DrsConfig.DefaultVmBehavior
            "DRS Recommendations" = &{$result = $cluster.Extensiondata.Recommendation | %{$_.Reason};if($result){[string]::Join(',',$result)}}
            "DRS Faults" = &{$result = $cluster.Extensiondata.drsFault | %{$_.Reason};if($result){[string]::Join(',',$result)}}
            "Migration Threshold" = $cluster.Extensiondata.ConfigurationEx.DrsConfig.VmotionRate
            "target hosts load standard deviation" = "NA"
            "Current host load standard deviation" = "NA"
            "Total Physical Memory (MB)" = ($esx | Measure-Object -Property MemoryTotalMB -Sum).Sum
            "Configured Memory MB" = ($esx | Measure-Object -Property MemoryUsageMB -Sum).Sum
            "Available Memory (MB)" = ($esx | Measure-Object -InputObject {$_.MemoryTotalMB - $_.MemoryUsageMB} -Sum).Sum
            "Total CPU (Mhz)" = ($esx | Measure-Object -Property CpuTotalMhz -Sum).Sum
            "Configured CPU (Mhz)" = ($esx | Measure-Object -Property CpuUsageMhz -Sum).Sum
            "Available CPU (Mhz)" = ($esx | Measure-Object -InputObject {$_.CpuTotalMhz - $_.CpuUsageMhz} -Sum).Sum
            "Total Disk Space (MB)" = ($ds | where {$_.Type -eq "VMFS"} | Measure-Object -Property CapacityMB -Sum).Sum
            "Configured Disk Space (MB)" = ($ds | Measure-Object -InputObject {$_.CapacityMB - $_.FreeSpaceMB} -Sum).Sum
            "Available Disk Space (MB)" = ($ds | Measure-Object -Property FreeSpaceMB -Sum).Sum
        }
      }
    }
    $report | Export-Csv "Q:\Cluster-Report.csv" -NoTypeInformation -UseCulture
    
  • RETURN type of a table function

    Hello

    I have read conflicting information about the return type that a table function must or may use.

    First, I am studying a book that says:

    Pipelined function return data types:

    The main constraint for pipelined functions is that the return type must be a standalone collection type that can be used in SQL - i.e. a VARRAY or nested table.

    and then in the next sentence...

    More precisely, a pipelined function can return the following:

    A standalone nested table or VARRAY, defined at the schema level.

    A nested table or VARRAY type that has been declared in a package.

    This seems to go against the first quoted sentence.

    Now, before reading the above text I had just done my own test to see whether a packaged type would work, because I thought I had read somewhere that it would not; and indeed it does not (the test code and its output are at the end of this question). So when I came upon the text above after my test, I was naturally confused.

    So I go to the PL/SQL Reference, which says:

    RETURN data type

    The data type of the value returned by a pipelined table function must be a collection type defined either at the schema level or within a package (therefore, it cannot be an associative array type).

    I tried calling a function that returns a packaged VARRAY collection type from both SQL and PL/SQL (of course, below it is all SQL in any case) and neither works.

    Now I'm wondering: is it that a TABLE function must use a schema-level type, while a pipelined table function can use a packaged type?  Note that I created and called a plain table function, whereas the Oracle examples show the creation and use of a pipelined table function.

    Edit: I should add that I read the following sentence in the SF book on p609 on table functions: "this nested table type must be defined as a schema-level element, because the SQL engine must be able to resolve a reference to a collection of this kind."

    So it is beginning to look as though table functions must return a schema-level type, while pipelined table functions (perhaps because they don't in fact return a collection, but rather return row-source content) can use either schema-level types or packaged types. Is this correct?

    Can someone clarify this for me please?

    Thank you in advance,

    J

    CREATE OR REPLACE PACKAGE PKGP28M as
      type vat is varray(5) of number;
    END;
    /
    SHOW ERRORS

    create or replace type vat is varray(5) of number;
    /
    show errors

    create or replace function tabfunc1 return pkgp28m.vat as
      numtab pkgp28m.vat := pkgp28m.vat();
    begin
      numtab.extend(5);
      for i in 1..5 loop
        numtab(i) := trunc(dbms_random.value(1,5));
      end loop;
      return numtab;
    end;
    /
    show errors

    create or replace function tabfunc2 return vat as
      numtab vat := vat();
    begin
      numtab.extend(5);
      for i in 1..5 loop
        numtab(i) := trunc(dbms_random.value(1,5));
      end loop;
      return numtab;
    end;
    /
    show errors

    exec dbms_output.put_line('calling tabfunc1 (returns the packaged type):');
    select * from table(tabfunc1)
    /
    exec dbms_output.put_line('calling tabfunc2 (returns the schema-level type):');
    select * from table(tabfunc2)
    /

    declare
      rc sys_refcursor;
      v number;
    begin
      dbms_output.put_line('in anonymous block1 - open rc for select from table(tabfunc1) (returns the packaged type):');
      open rc for select column_value from table(tabfunc1);
      loop
        fetch rc into v;
        exit when rc%notfound;
        dbms_output.put_line('> ' || to_char(v));
      end loop;
      close rc;
    end;
    /

    declare
      rc sys_refcursor;
      v number;
    begin
      dbms_output.put_line('in anonymous block2 - open rc for select from table(tabfunc2) (returns the schema-level type):');
      open rc for select column_value from table(tabfunc2);
      loop
        fetch rc into v;
        exit when rc%notfound;
        dbms_output.put_line('> ' || to_char(v));
      end loop;
      close rc;
    end;
    /

    Scott@ORCL > @C:\Users\J\Documents\SQL\test29.sql

    Package created.

    No errors.

    Type created.

    No errors.

    Function created.

    No errors.

    Function created.

    No errors.

    calling tabfunc1 (returns the packaged type):

    PL/SQL procedure successfully completed.

    select * from table(tabfunc1)
    *
    ERROR at line 1:
    ORA-00902: invalid datatype

    calling tabfunc2 (returns the schema-level type):

    PL/SQL procedure successfully completed.

    COLUMN_VALUE
    ------------
               1
               4
               1
               1
               3

    in anonymous block1 - open rc for select from table(tabfunc1) (returns the packaged type):

    declare
    *
    ERROR at line 1:
    ORA-00902: invalid datatype
    ORA-06512: at line 6

    in anonymous block2 - open rc for select from table(tabfunc2) (returns the schema-level type):

    > 1
    > 2
    > 4
    > 2
    > 3

    PL/SQL procedure successfully completed.

    Post edited by: Jason_942375

    But compiling the PIPELINED function WILL CREATE the schema-level types automatically. And the TABLE operator, applied to the PIPELINED function, uses those hidden schema-level types.
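    A minimal sketch that illustrates the point (tabfunc3 is a hypothetical addition to the test above): the only difference from tabfunc1 is the PIPELINED keyword, yet it can be queried from SQL while declaring the packaged pkgp28m.vat return type, because the compiler generates the hidden schema-level types for it.

    create or replace function tabfunc3 return pkgp28m.vat pipelined as
    begin
      for i in 1..5 loop
        -- pipe each element instead of building and returning a collection
        pipe row(trunc(dbms_random.value(1,5)));
      end loop;
      return;
    end;
    /
    select * from table(tabfunc3);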

  • Can I plug a tablespace into a different database?

    Version: 11.2.0.3 / RHEL 5.8

    We have a non-prod database with a weekly schema-level backup taken using Data Pump. This DB contains QA schemas which are essential to our product releases.

    The SYSTEM tablespace datafile has been corrupted. Since the last logical backup taken with expdp 5 days back, many changes have been made to the business schema objects, so restoring the schemas from the expdp dumpfile will not be of much help.

    We have a critical QA schema called LMFS_QA that uses a tablespace (LMFS_QA_DATA) with 4 datafiles. All the datafiles were fine when the DB went down because of the corruption of the SYSTEM tablespace.

    Is there any way I can plug this tablespace into another healthy database after the LMFS_QA user has been created in that DB?

    Is there any way I can plug this tablespace into another healthy database after the LMFS_QA user has been created in that DB?

    Sorry to say, but the short answer is no.  You cannot just plug these datafiles into a different database.
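    For context: transporting a tablespace normally requires the source database to be open, so that its metadata can be exported while the tablespace is read-only, which is impossible once SYSTEM is corrupted. A sketch of the normal flow on a healthy source:

    -- On the source database (healthy and open):
    ALTER TABLESPACE lmfs_qa_data READ ONLY;
    -- expdp system/... DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=LMFS_QA_DATA
    -- Copy the datafiles to the target, then:
    -- impdp system/... DUMPFILE=tts.dmp TRANSPORT_DATAFILES=<datafile paths>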

    No doubt you don't have any RMAN backups, or you wouldn't be asking this question?  Critical databases should be in ARCHIVELOG mode and backed up using RMAN.

    Use your expdp file and re-apply your changes.

    Is there no way to recover your "corrupt" database?  Have you logged a call with Oracle?

  • Cluster statistics

    Hello

    I have the attached cluster report script, in which I am trying to record the following data, and I need help getting the right information through the cmdlets and functions.

    These category names match the information represented in the VC Client interface:

    • Summary tab
      • General
        • Total CPU resources
        • Total memory
        • Number of hosts
        • Total processors
          • Number of sockets -?
          • Number of cores -?
      • VMware HA
        • Current CPU failover capacity
        • Current memory failover capacity
        • Configured failover capacity
      • VMware DRS
        • Migration automation level
        • DRS recommendations
        • DRS faults
        • Migration threshold
        • Target host load standard deviation
        • Current host load standard deviation
    • Resource Allocation tab
      • CPU
        • Total capacity
        • Reserved capacity
        • Available capacity
      • Memory
        • Total capacity
        • Reserved capacity
        • Available capacity
    • Performance tab
      • Hosts view - per host
        • "n"-day summary - CPU %
        • "n"-day summary - memory MB
        • "n"-day summary - disk ms
        • Top 10 - CPU usage
        • Top 10 - memory consumed
        • Top 10 - disk (KBps)
        • Top 10 - network (Mbps)

    Any help in getting this extra information is much appreciated.

    Start with this

    $report = @()
    # $clusterName = "MyCluster"
    $clusterName = "*"
    $report = foreach($cluster in Get-Cluster -Name $clusterName){
        $esx = $cluster | Get-VMHost
        $ds = Get-Datastore -VMHost $esx | where {$_.Type -eq "VMFS" -and $_.Extensiondata.Summary.MultipleHostAccess}
    
        New-Object PSObject -Property @{
            VCname = $cluster.Uid.Split(':@')[1]
            DCname = (Get-Datacenter -Cluster $cluster).Name
            Clustername = $cluster.Name
            "Number of hosts" = $esx.Count
            "Total Processors" = ($esx | measure -InputObject {$_.Extensiondata.Summary.Hardware.NumCpuPkgs} -Sum).Sum
            "Total Cores" = ($esx | measure -InputObject {$_.Extensiondata.Summary.Hardware.NumCpuCores} -Sum).Sum
            "Current CPU Failover Capacity" = $cluster.Extensiondata.Summary.AdmissionControlInfo.CurrentCpuFailoverResourcesPercent
            "Current Memory Failover Capacity" = $cluster.Extensiondata.Summary.AdmissionControlInfo.CurrentMemoryFailoverResourcesPercent
            "Configured Failover Capacity" = $cluster.Extensiondata.ConfigurationEx.DasConfig.FailoverLevel
            "Migration Automation Level" = $cluster.Extensiondata.ConfigurationEx.DrsConfig.DefaultVmBehavior
            "DRS Recommendations" = &{$result = $cluster.Extensiondata.Recommendation | %{$_.Reason};if($result){[string]::Join(',',$result)}}
            "DRS Faults" = &{$result = $cluster.Extensiondata.drsFault | %{$_.Reason};if($result){[string]::Join(',',$result)}}
            "Migration Threshold" = $cluster.Extensiondata.ConfigurationEx.DrsConfig.VmotionRate
            "target hosts load standard deviation" = "NA"        "Current host load standard deviation" = "NA"
            "Total Physical Memory (MB)" = ($esx | Measure-Object -Property MemoryTotalMB -Sum).Sum
            "Configured Memory MB" = ($esx | Measure-Object -Property MemoryUsageMB -Sum).Sum
            "Available Memroy (MB)" = ($esx | Measure-Object -InputObject {$_.MemoryTotalMB - $_.MemoryUsageMB} -Sum).Sum
            "Total CPU (Mhz)" = ($esx | Measure-Object -Property CpuTotalMhz -Sum).Sum
            "Configured CPU (Mhz)" = ($esx | Measure-Object -Property CpuUsageMhz -Sum).Sum
            "Available CPU (Mhz)" = ($esx | Measure-Object -InputObject {$_.CpuTotalMhz - $_.CpuUsageMhz} -Sum).Sum
            "Total Disk Space (MB)" = ($ds | where {$_.Type -eq "VMFS"} | Measure-Object -Property CapacityMB -Sum).Sum
            "Configured Disk Space (MB)" = ($ds | Measure-Object -InputObject {$_.CapacityMB - $_.FreeSpaceMB} -Sum).Sum
            "Available Disk Space (MB)" = ($ds | Measure-Object -Property FreeSpaceMB -Sum).Sum
        }
    }
    
    $report | Export-Csv "C:\Cluster-Report.csv" -NoTypeInformation -UseCulture
    

    It will produce most of the properties, except for the performance tab entries.

    I didn't understand what you wanted there - the values presented in those graphs?

    Also note that the two standard-deviation properties are marked as "NA".

    The exact method of calculation for these 2 has not been published as far as I know.
