Statistics by week

Hi all

I have a list of rentals of a room throughout the year (one year). Each stay has a check-in and a check-out date. There may be several stays in the same week.

I am interested, of course, in the number of nights spent in the place. I want a breakdown by week: the nights of occupancy and the occupancy percentage.

From week 1 to week 53, with weeks starting on Monday.

Each stay has a different length: it can be 1 night, or 10 or more nights.

The number of nights in a stay = c/o - c/i (check-out date minus check-in date).

If both c/i and c/o fall inside week W, the stay counts (c/o - c/i) nights in week W.

OR if c/i < W-start and c/o > W-end, it counts 7 nights in week W, and the rest is counted in W-1 or W+1.

OR if etc...
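
In other words, I believe the nights one stay contributes to week W can be written as MAX(0, MIN(c/o, W-end + 1) - MAX(c/i, W-start)), where W-start is the Monday of week W and W-end + 1 is the Monday of the following week.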

The analysis of the problem is not that difficult.

The difficulty is finding formulas suited to the problem: I don't want to get into a cascade of IFs, and I want to sum these nights cumulatively.

The result is essentially 53 rows and 1 column, where each row is the sum of the nights falling in that week.

I tried SUMIFS, but it is quite difficult to manage because all the arguments must satisfy all the conditions at the same time.

Maybe I should use OR (SUMIFS,...)

Besides this, the difficult point is to determine the dates of W1, W2, etc. in 2015, 2016, 2017...

It is very easy to go from a date to the week number with the dedicated function in Numbers, but the reverse is quite difficult.
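
(The closest I have found so far for the reverse is something like DATE(Y, 1, 4) - WEEKDAY(DATE(Y, 1, 4), 2) + 1 + 7 * (N - 1) for the Monday of ISO week N of year Y, assuming WEEKDAY with a second argument of 2 counts Monday as day 1, but I am not sure it is correct in every case.)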

Any idea is welcome.

Lopez

Hey Lopez,

This looks like an interesting problem. Would it be possible for you to post more details with an example of the desired results, perhaps with a screenshot?

SG

Tags: iWork

Similar Questions

  • How to get statistics on dropped network packets for all virtual machines over the last week?

    Hello

    I have this (new) script, thanks to Mr. LucD, which is 'supposed' to collect the dropped network packet statistics on a schedule:

    $start = Get-Date "01/01/2015 00:00"

    $finish = Get-Date "21/01/2015 14:30"
    
    $metrics = "net.droppedRx.summation","net.droppedTx.summation"
    foreach($esx in (Get-VMHost)){
        $vms = Get-VM -Location $esx
        if($vms){
            Get-Stat -Entity $vms -Stat $metrics -Start $start -Finish $finish -ErrorAction SilentlyContinue |
            where {$_.Instance -ne ""} |
            Group-Object -Property {$_.Entity.Name,$_.Instance} | %{
                $_.Group | Group-Object -Property Timestamp | %{
                    New-Object PSObject -Property @{
                        VMHost = $esx.Name
                        VM = $_.Group[0].Entity.Name
                        VmNic = $_.Group[0].Instance
                        "Receive Dropped Packets" = $_.Group | where {$_.MetricId -eq "net.droppedRx.summation"} | Select -ExpandProperty Value
                        "Transmit Dropped Packets" = $_.Group | where {$_.MetricId -eq "net.droppedTx.summation"} | Select -ExpandProperty Value
                        Timestamp = $_.Group[0].Timestamp
                        "Interval (seconds)" = $_.Group[0].IntervalSecs
                    }
                }
            }
        }
    } | Export-CSV -path C:\Temp\Result.CSV
    

    but somehow there are no results coming out of this script?

    How can I get the results into the .CSV file?

    The foreach loop does not put anything in the pipeline, but that can be fixed by using the call operator (&):

    $finish = Get-Date "21/01/2015 14:30"

    $metrics = "net.droppedRx.summation","net.droppedTx.summation"
    & {foreach($esx in (Get-VMHost)){
        $vms = Get-VM -Location $esx
        if($vms){
            Get-Stat -Entity $vms -Stat $metrics -Start $start -Finish $finish -ErrorAction SilentlyContinue |
            where {$_.Instance -ne ""} |
            Group-Object -Property {$_.Entity.Name,$_.Instance} | %{
                $_.Group | Group-Object -Property Timestamp | %{
                    New-Object PSObject -Property @{
                        VMHost = $esx.Name
                        VM = $_.Group[0].Entity.Name
                        VmNic = $_.Group[0].Instance
                        "Receive Dropped Packets" = $_.Group | where {$_.MetricId -eq "net.droppedRx.summation"} | Select -ExpandProperty Value
                        "Transmit Dropped Packets" = $_.Group | where {$_.MetricId -eq "net.droppedTx.summation"} | Select -ExpandProperty Value
                        Timestamp = $_.Group[0].Timestamp
                        "Interval (seconds)" = $_.Group[0].IntervalSecs
                    }
                }
            }
        }
    }} | Export-CSV -Path C:\Temp\Result.CSV

  • 11g: incremental statistics - effects of a wrong setting

    I have a database with date-based partitioned tables (weekly partitions).

    These tables are supposed to use incremental statistics.

    However, I am missing updates in user_tab_col_statistics.

    Whenever statistics have been gathered on all the partitions, something is available in user_tab_col_statistics - with a correct last_analyzed.

    However, the partitioning always generates a few future partitions in advance.

    Those are empty.

    Intentionally, a regular job deletes statistics for those, on the assumption that they would mess up the global statistics (because these partitions are empty - unlike the "regular" partitions).

    When this job has run and deleted the statistics for the future partitions, user_tab_col_statistics is empty again!

    It's actually a little more complicated/worse:

    (a) certain tables have no entry in user_tab_col_statistics.

    But (b) certain tables have old entries in user_tab_col_statistics (old value of last_analyzed).

    When dbms_stats.gather_table_stats(...) is applied to all the partitions, new data is available in user_tab_col_statistics for both (a) and (b).

    If the statistics for the future partitions are now deleted, then:

    (a) the tables which had no entry in user_tab_col_statistics again have no entry here;

    (b) the tables which had old entries in user_tab_col_statistics have the old entries again (?).

    While investigating, I discovered that the INCREMENTAL table pref is FALSE.

    I think it is wrong for it to be FALSE when trying to use incremental statistics.

    Can anyone confirm this strange behavior with the wrong setting?

    -Thanks a lot!

    Best regards
    Frank

    Use the INTERVAL partitioning feature of 11g; you only need to create one partition, as in the script above.

    What's more, you DO NOT pre-create the future partitions, so you MUST NOT delete statistics.
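
    For illustration only (table and column names are made up), a weekly INTERVAL-partitioned table could be declared roughly like this, so that new weekly partitions are created on demand instead of in advance:

    CREATE TABLE sales_hist
    ( sale_date  DATE,
      amount     NUMBER
    )
    PARTITION BY RANGE (sale_date)
    INTERVAL (NUMTODSINTERVAL(7, 'DAY'))   -- one partition per week, created automatically on first insert
    ( PARTITION p_first VALUES LESS THAN (DATE '2013-01-07') );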

  • collect statistics for the tablespace

    Friends...

    OS: Linux

    DB: 11gR2

    Data size: 1 TB

    Every month I move multiple partitioned tablespaces and combine them into a single annual tablespace (for example tbs_2014_01, tbs_2014_02 ... tbs_2014_12 are all combined into tbs_2014 as one tablespace).

    Over the weekend, a database job runs that collects stale statistics; it gathers on all the segments that have been moved in storage.

    Since the weekend statistics collection takes too long, I am trying to find a smart way to collect statistics after each tablespace move rather than waiting for the weekend job, which takes two or three days to complete.

    1. Is there a way to gather statistics at the tablespace level, i.e. collect statistics for all objects in that tablespace?

    2. How should the global stats part of the collection be handled?

    That is, suppose I move the tbs_2014_01 tablespace and collect statistics including global stats; that could take 2 hours, but it would be hard to spend 2 hours on global stats for each tablespace, which in my opinion is not good - we should collect global stats only once.

    3. Any other advice?

    977272 wrote:

    @sol.beach... Thanks for your comments...

    I've not been asked to collect statistics at the tablespace level, but to collect statistics after the objects finish moving in storage.

    Given the size of the data, it is difficult to gather all the statistics at the weekend, so I am trying to find another method of collecting the statistics so that the weekend load will be less.

    You can collect statistics on an object-by-object basis after each object has been moved.
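
    For example (the owner filter and tablespace name are placeholders), a small PL/SQL block run right after each move could gather statistics for every table in that tablespace:

    BEGIN
      FOR t IN (SELECT owner, table_name
                  FROM dba_tables
                 WHERE tablespace_name = 'TBS_2014_01')  -- the tablespace just moved
      LOOP
        DBMS_STATS.GATHER_TABLE_STATS(
          ownname => t.owner,
          tabname => t.table_name,
          cascade => TRUE);                              -- also gather the index statistics
      END LOOP;
    END;
    /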

  • Clarification on the collection of statistics

    Hello

    My query performs very slowly; it involves partitioned tables and uses parallelism and indexes.

    The last_analyzed column shows 11-July (2 weeks ago).

    Do the statistics need to be gathered every day?

    How long would that take?

    S

    Also check the STALE_STATS column in the statistics dictionary views (ALL_TAB_STATISTICS, ALL_IND_STATISTICS).

    Managing Optimizer Statistics

    This indicates when there have been enough DML operations on the object to warrant a statistics update. The nightly automatic stats collection job should give priority to updating statistics on these objects.
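
    For example, a quick check for objects the optimizer considers stale (the owner is a placeholder; use your own schema) could be:

    SELECT table_name, last_analyzed, stale_stats
      FROM all_tab_statistics
     WHERE owner = 'MYSCHEMA'
       AND stale_stats = 'YES';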

  • How will I know when to run Oracle table statistics?

    I have a situation here. Several weeks back, one of my application developers complained about a performance issue in one of the applications. I looked at the table statistics, saw they were outdated, gathered the table statistics, and the query ran a lot faster. The same problem occurred in another application, and hoping that Oracle would take the right decisions based on the statistics, I gathered the statistics for the tables involved in the query, but this time things got worse. Why is this? According to the Oracle documentation the optimizer should have updated statistics on the tables and on the indexes surrounding these tables, so how do we decide when to gather statistics on the tables while making sure we don't make things worse? Thank you

    You don't tell us your db version, but if you're on 10g and above, Oracle marks statistics as stale when the percentage of changed rows reaches or crosses 10%. From 11.1 an automatic task, managed by the ABP (Autotask Background Process), runs nightly from 10pm to 02:00 on weekdays (can't recall for weekends) and takes care of the same. The question of when to collect statistics is actually very subjective to your environment. Most of the time, it would be a weekly or nightly job, and in a data warehouse, right after the ETL window. You would need to play with the percentage of changed data in the table before you arrive at a percentage that works well for your plans. I have also seen that cardinality feedback sometimes plays a large role in this exercise. You can look at that as well.
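
    If the default threshold does not suit your data, 11g also lets you tune it per table through statistics preferences; a sketch (owner and table names are placeholders):

    EXEC DBMS_STATS.SET_TABLE_PREFS('APP_OWNER', 'BIG_TABLE', 'STALE_PERCENT', '5');
    -- check the value in effect
    SELECT DBMS_STATS.GET_PREFS('STALE_PERCENT', 'APP_OWNER', 'BIG_TABLE') FROM dual;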

    HTH

    Aman...

  • vCenter 5.5 appliance database has quadrupled since a patch 5 weeks ago

    Since my vCenter appliance update, its PostgreSQL database has grown 250% in size, filling the 60 GB partition in two weeks - it had been stable at about 20-30 GB for a year or two.  I discovered this after I ran out of space - the DB showed 100% usage and vCenter wouldn't start.  I expanded the disk and added 50 GB (110 GB total) to the partition to get things working again, but it filled up again 3 weeks later.

    I'm on v5.5.0.20400 now; the previous version was 5.5.0.10300.  I have 32 hosts and 200 guests.

    I tried to manually vacuum the database, but only gained an additional 3% of disk space (down to 97%).

    I temporarily turned down statistics collection and database retention to 1 day and got another 10% of breathing room after a reboot (down to 87% of 110 GB).

    I would like to know whether there is an upper limit to this growth so that I can plan for it (maybe VMware is adding more features to prepare for the upgrade to vSphere 6.0?), whether it is a bug with no upper limit to the growth, or whether there is something else I can do to shrink the DB.

    Resolved: turns out it was log files growing out of control in /storage/db/vpostgres/pg_log, caused by the update I installed.

    Solution is here:

    http://KB.VMware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalID=2092127

  • VM statistics script does not show the correct Cluster name

    I tried to add a column to display the cluster the virtual machine is in, but apparently I got it wrong.  The column lists a cluster, but it's the wrong one, and it's the same for all virtual machines.

    #####################################
    # VMware Virtual Machine Statistics #
    #####################################

    function VM-statavg ($vmImpl, $StatStart, $StatFinish, $statId) {
        $stats = $vmImpl | Get-Stat -Stat $statId -IntervalMins 120 -MaxSamples 360 `
            -Start $StatStart -Finish $StatFinish
        $statAvg = "{0,9:#.00}" -f ($stats | Measure-Object Value -Average).Average
        $statAvg
    }
    # Report for the previous month
    $DaysBack = 30 # number of days to go back
    $DaysPeriod = 30 # number of days in the interval
    $DayStart = (Get-Date).Date.AddDays(-$DaysBack)
    $DayFinish = (Get-Date).Date.AddDays(-$DaysBack + $DaysPeriod).AddMinutes(-1)
    # Report for the previous week
    $DaysBack = 7 # number of days to go back
    $DaysPeriod = 7 # number of days in the interval
    $WeekStart = (Get-Date).Date.AddDays(-$DaysBack)
    $WeekFinish = (Get-Date).Date.AddDays(-$DaysBack + $DaysPeriod).AddMinutes(-1)
    $report = @()
    Get-VM | Sort Name | % {
        $vm = Get-View $_.ID
        $vms = "" | Select-Object VMName, Hostname, Cluster, MonthAvgCpuUsage, WeekAvgCpuUsage, VMState, TotalCPU, TotalMemory, MonthAvgMemUsage, WeekAvgMemUsage, TotalNics, ToolsStatus, ToolsVersion
        $vms.VMName = $vm.Name
        $vms.Hostname = $vm.guest.hostname
        $vms.Cluster = $Cluster.Name
        $vms.MonthAvgCpuUsage = VM-statavg $_ $DayStart $DayFinish "cpu.usage.average"
        $vms.WeekAvgCpuUsage = VM-statavg $_ $WeekStart $WeekFinish "cpu.usage.average"
        $vms.VMState = $vm.summary.runtime.powerState
        $vms.TotalCPU = $vm.summary.config.numcpu
        $vms.TotalMemory = $vm.summary.config.memorysizemb
        $vms.MonthAvgMemUsage = VM-statavg $_ $DayStart $DayFinish "mem.usage.average"
        $vms.WeekAvgMemUsage = VM-statavg $_ $WeekStart $WeekFinish "mem.usage.average"
        $vms.TotalNics = $vm.summary.config.numEthernetCards
        $vms.ToolsStatus = $vm.guest.toolsstatus
        $vms.ToolsVersion = $vm.config.tools.toolsversion
        $Report += $vms
    }

    $Report | ConvertTo-Html -Title "VMware Virtual Machine Statistics" -Body "<H2>VMware Virtual Machine Statistics</H2>" | Out-File -Append $filelocation

    Looks like you forgot to get the cluster.

    The first lines of the loop must be something like this

    ...

    Get-VM | Sort Name | % {
        $vm = Get-View $_.ID
        $cluster = Get-Cluster -VM $_

    ....

  • Collecting statistics for tables using jobs in Oracle 11g 11.2.0.1.0?

    Hello

    My query is regarding the collection of statistics for tables.

    My Version of Oracle DB is:
    BANNER
    --------------------------------------------------------------------------------
    Oracle Database 11 g Enterprise Edition Release 11.2.0.1.0 - 64 bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE 11.2.0.1.0 Production
    AMT for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production

    In prior Oracle DB versions, we used to schedule jobs to run on a daily basis to collect statistics, especially for tables that get frequent and huge inserts.

    I read that in 11g, stats for all the schemas in a database are automatically gathered by nightly jobs. I checked these jobs and I see that they are running on a monthly basis [query attached]. This job is enabled and is scheduled to run monthly.
    Does this mean that my schema will be analyzed on a monthly basis? Is my understanding correct?

    Can I still schedule jobs to collect statistics for specific tables every week? Will this degrade performance?
    We expect 100,000 records to be inserted daily.
    SELECT  JOB_NAME,  
       START_DATE, REPEAT_INTERVAL, 
         LAST_START_DATE, 
        NEXT_RUN_DATE,ENABLED
    FROM dba_scheduler_jobs
    WHERE job_name LIKE '%STAT%'
    ORDER BY 1;
    JOB_NAME        : BSLN_MAINTAIN_STATS_JOB
    START_DATE      : 16-AUG-09 12.00.00.000000000 AM -07:00
    REPEAT_INTERVAL :
    LAST_START_DATE : 14-APR-13 12.00.00.427370000 AM -07:00
    NEXT_RUN_DATE   : 21-APR-13 12.00.00.400000000 AM -07:00
    ENABLED         : TRUE

    JOB_NAME        : MGMT_STATS_CONFIG_JOB
    START_DATE      : 15-AUG-09 12.24.04.694342000 AM -07:00
    REPEAT_INTERVAL : freq=monthly;interval=1;bymonthday=1;byhour=01;byminute=01;bysecond=01
    LAST_START_DATE : 01-APR-13 01.01.01.710280000 AM -07:00
    NEXT_RUN_DATE   : 01-MAY-13 01.01.01.700000000 AM -07:00
    ENABLED         : TRUE
    Thank you
    Somiya

    Your understanding is not correct. These jobs are in dba_autotask_task.
    And they will be run every day.
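
    For instance, next to dba_autotask_task you can check whether the stats collection client is enabled with something like:

    SELECT client_name, status
      FROM dba_autotask_client
     WHERE client_name = 'auto optimizer stats collection';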

    HTH

    -------------
    Sybrand Bakker
    Senior Oracle DBA

  • Restoring host/VM statistics after adding a host back to the cluster

    After removing certain hosts from a cluster and then adding them back, all the virtual machine statistics are gone.
    Is it possible to retrieve these stats?  It only has them for 'real time' and 'day', which is the time since they have been back in the cluster.
    Oh, and keep in mind that when an alarm definition is selected in the right pane, a cluster can ALSO be highlighted in the left pane, so hitting delete removes the item currently highlighted, and that could be a cluster. Oops!  And when right-clicking the task in the vSphere client, the Cancel option is grayed out!  (and yes, there is an "Are you sure" confirmation before the delete...)

    When you remove a host from vCenter, it assumes that the entity is no longer managed and you lose all the performance statistics, including those of its virtual machines. There is no way to restore this information; once the host is added back, vCenter will resume collecting and, via the roll-up jobs, you will eventually begin to see your weekly, monthly, etc. statistics again.

  • ESX performance statistics

    I need help...

    After watching some interesting info from VMware in a PowerCLI session, I realized that I could get IOPS (and other performance statistics) from the ESX servers via PowerCLI.

    I had a script that goes to each VMHost (ESX) and requests the disk.commands.summation for each VM and datastore.

    Then I could summarize these, take the average, and get a report on how the IO looks.

    I had to go directly to the ESX hosts rather than through vCenter, since at the standard vCenter statistics level disk.commands.summation is only kept in real time, while the ESX host retains at least a day of it.

    Later, I noticed that I could raise the statistics level and get the IOPS from vCenter for a week...

    But when trying to get the statistics from vCenter (via PowerCLI), I get nothing.

    I did Get-VMHost | Get-StatType... and I see the stat... but when I do Get-VMHost | Get-Stat -Stat disk.commands.summation, it returns nothing...

    Can someone please help... I would like to know the IOPS for each VM and datastore for the past week, so I can properly balance my datastores.

    OK, let's try since last week.

    Get-VMHost  | Get-Stat -Stat disk.commands.summation -Start (Get-Date).AddDays(-7)
    

    Does this return anything?

    One last thing: are you connected (Connect-VIServer) to the vCenter and not to the ESX(i) host?

    ____________

    Blog: LucD notes

    Twitter: lucd22

  • index statistics

    Hi all

    Once a week, we collect statistics on all schemas. But when I do
    SQL> select HEIGHT,BLOCKS from index_stats where name='FND_CONCURRENT_REQUESTS_N1';
    
    no rows selected
    
    SQL> select count(*) from index_stats;
    
      COUNT(*)
    ----------
             0
    
    1 row selected.
    The index_stats view has no rows, and in our env the statistics_level string is TYPICAL.

    Thank you
    Baskar.l

    Are you logged in as APPLSYS? INDEX_STATS only reports for the current session.

    Hemant K Chitale

    SQL> analyze index SALES_SALE_DATE_NDX validate structure;
    
    Index analyzed.
    
    SQL> select  * from index_stats;
    
        HEIGHT     BLOCKS NAME
    ---------- ---------- ------------------------------
    PARTITION_NAME                    LF_ROWS    LF_BLKS LF_ROWS_LEN LF_BLK_LEN
    ------------------------------ ---------- ---------- ----------- ----------
       BR_ROWS    BR_BLKS BR_ROWS_LEN BR_BLK_LEN DEL_LF_ROWS DEL_LF_ROWS_LEN
    ---------- ---------- ----------- ---------- ----------- ---------------
    DISTINCT_KEYS MOST_REPEATED_KEY BTREE_SPACE USED_SPACE   PCT_USED ROWS_PER_KEY
    ------------- ----------------- ----------- ---------- ---------- ------------
    BLKS_GETS_PER_ACCESS   PRE_ROWS PRE_ROWS_LEN OPT_CMPR_COUNT OPT_CMPR_PCTSAVE
    -------------------- ---------- ------------ -------------- ----------------
             3       2816 SALES_SALE_DATE_NDX
                                      1000000       2653    19000000       7996
          2652          6       37063       8028           0               0
          1000000                 1    21261556   19037063         90            1
                       4          0            0              0                0
    
    SQL> analyze index SALES_SALE_DATE_NDX validate structure online;
    
    Index analyzed.
    
    SQL> select  * from index_stats;
    
        HEIGHT     BLOCKS NAME
    ---------- ---------- ------------------------------
    PARTITION_NAME                    LF_ROWS    LF_BLKS LF_ROWS_LEN LF_BLK_LEN
    ------------------------------ ---------- ---------- ----------- ----------
       BR_ROWS    BR_BLKS BR_ROWS_LEN BR_BLK_LEN DEL_LF_ROWS DEL_LF_ROWS_LEN
    ---------- ---------- ----------- ---------- ----------- ---------------
    DISTINCT_KEYS MOST_REPEATED_KEY BTREE_SPACE USED_SPACE   PCT_USED ROWS_PER_KEY
    ------------- ----------------- ----------- ---------- ---------- ------------
    BLKS_GETS_PER_ACCESS   PRE_ROWS PRE_ROWS_LEN OPT_CMPR_COUNT OPT_CMPR_PCTSAVE
    -------------------- ---------- ------------ -------------- ----------------
             3       2816 SALES_SALE_DATE_NDX
                                      1000000       2653    19000000       7996
          2652          6       37063       8028           0               0
          1000000                 1    21261556   19037063         90            1
                       4          0            0              0                0
    
    SQL> analyze index SYS_C009094  compute statistics;
    
    Index analyzed.
    
    SQL> select  * from index_stats;
    
        HEIGHT     BLOCKS NAME
    ---------- ---------- ------------------------------
    PARTITION_NAME                    LF_ROWS    LF_BLKS LF_ROWS_LEN LF_BLK_LEN
    ------------------------------ ---------- ---------- ----------- ----------
       BR_ROWS    BR_BLKS BR_ROWS_LEN BR_BLK_LEN DEL_LF_ROWS DEL_LF_ROWS_LEN
    ---------- ---------- ----------- ---------- ----------- ---------------
    DISTINCT_KEYS MOST_REPEATED_KEY BTREE_SPACE USED_SPACE   PCT_USED ROWS_PER_KEY
    ------------- ----------------- ----------- ---------- ---------- ------------
    BLKS_GETS_PER_ACCESS   PRE_ROWS PRE_ROWS_LEN OPT_CMPR_COUNT OPT_CMPR_PCTSAVE
    -------------------- ---------- ------------ -------------- ----------------
             3       2816 SALES_SALE_DATE_NDX
                                      1000000       2653    19000000       7996
          2652          6       37063       8028           0               0
          1000000                 1    21261556   19037063         90            1
                       4          0            0              0                0
    
    SQL> set pages600
    SQL> set linesize 132
    SQL> disconnect
    Disconnected from Oracle Database 10g Enterprise Edition ....
    SQL> connect
    Enter user-name: user/password
    Connected.
    SQL> l
      1* select  * from index_stats
    SQL> /
    
    no rows selected
    
    SQL>
    

    In addition, it must not be a VALIDATE STRUCTURE ONLINE:

    SQL> l
      1* select  * from index_stats
    SQL> /
    
    no rows selected
    
    SQL> analyze index SALES_SALE_DATE_NDX validate structure online;
    
    Index analyzed.
    
    SQL> select  * from index_stats;
    
    no rows selected
    
    SQL>  analyze index SALES_SALE_DATE_NDX validate structure;
    
    Index analyzed.
    
    SQL> select  * from index_stats;
    
        HEIGHT     BLOCKS NAME                           PARTITION_NAME                    LF_ROWS    LF_BLKS LF_ROWS_LEN LF_BLK_LEN
    ---------- ---------- ------------------------------ ------------------------------ ---------- ---------- ----------- ----------
       BR_ROWS    BR_BLKS BR_ROWS_LEN BR_BLK_LEN DEL_LF_ROWS DEL_LF_ROWS_LEN DISTINCT_KEYS MOST_REPEATED_KEY BTREE_SPACE USED_SPACE
    ---------- ---------- ----------- ---------- ----------- --------------- ------------- ----------------- ----------- ----------
      PCT_USED ROWS_PER_KEY BLKS_GETS_PER_ACCESS   PRE_ROWS PRE_ROWS_LEN OPT_CMPR_COUNT OPT_CMPR_PCTSAVE
    ---------- ------------ -------------------- ---------- ------------ -------------- ----------------
             3       2816 SALES_SALE_DATE_NDX                                              1000000       2653    19000000      7996
          2652          6       37063       8028           0               0       1000000                 1    21261556   19037063
            90            1                    4          0            0              0                0
    
    SQL>
    

    Edited by: Hemant K Chitale on June 10, 2010 10:42

    Edited by: Hemant K Chitale on June 10, 2010 10:44

  • Disable the automatic optimizer statistics

    Hello

    I wanted to query user_tab_modifications to track the number of rows updated in a week. Because this view is updated automatically when the automatic optimizer statistics job gathers statistics, I have disabled the automatic optimizer statistics. Now I will run dbms_stats.FLUSH_DATABASE_MONITORING_INFO() manually to get the view fully up to date with the number of rows updated.
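
    Roughly, what I intend to run manually looks like this:

    EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

    SELECT table_name, inserts, updates, deletes, timestamp
      FROM user_tab_modifications;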

    My concern here is: will I get the exact number of rows updated per week from user_tab_modifications by doing this? Also, is there anything else that updates this view besides the optimizer statistics being gathered on a table?

    Thank you

    You might try writing a little PL/SQL yourself.

    Something like:

    SQL> create table count_X_updates (update_count number);
    
    Table created.
    
    SQL> insert into count_X_updates values (0);
    
    1 row created.
    
    SQL> commit;
    
    Commit complete.
    
    SQL>
    SQL> create table X (col_1 varchar2(5), col_2 varchar2(5), col_3 number);
    
    Table created.
    
    SQL> insert into X values ('a','first',1);
    
    1 row created.
    
    SQL> commit;
    
    Commit complete.
    
    SQL>
    
    SQL> create or replace trigger count_x_updates_trg
      2  after update of col_1,col_2,col_3
      3  on X
      4  for each row
      5  declare prev_cnt number;
      6  begin
      7  update count_X_updates set update_count = update_count+1;
      8* end;
    SQL> /
    
    Trigger created.
    
    SQL>  update x set col_1 = 'b', col_2='secnd',col_3=2;
    
    1 row updated.
    
    SQL> commit;
    
    Commit complete.
    
    SQL> select * from count_X_updates;
    
    UPDATE_COUNT
    ------------
               1
    
    SQL>  update x set col_1 = 'c' where col_3=2;
    
    1 row updated.
    
    SQL> commit;
    
    Commit complete.
    
    SQL> select * from count_X_updates;
    
    UPDATE_COUNT
    ------------
               2
    
    SQL> select * from x;
    
    COL_1 COL_2      COL_3
    ----- ----- ----------
    c     secnd          2
    
    SQL>
    

    Note: this trigger code needs to be improved because
    a. several sessions could obtain the same value
    b. it introduces serialization - several sessions waiting on the lock on the count_X_updates table row - which effectively means that all other sessions trying to update X will wait (even if they are updating different rows of X) until each preceding one issues a COMMIT.

    Thus, this demo code is only to show you PL/SQL triggers; it should not be used in Production as-is.

    Practice some PL/SQL. Learn about autonomous transactions.

    Hemant K Chitale

  • Gather Schema Statistics run

    Hello
    In eBS on Unix AIX, we are experiencing a performance issue.
    In our research, we came across this note: MOS ID 744143.1,
    which says that gathering should not be run every day.
    Could the cause of the slowness be that we run it every night, as is our case?
    We run it every day, at night, for all schemas.
    Please clarify this!

    Hello

    Run this concurrent program on a weekly or monthly basis, depending on the data load/change you have in your instance - see (Note: 168136.1 - How Often Should Gather Schema Statistics Program Run?) for more details.

    Kind regards
    Hussein

  • I recently migrated my iPhoto library to Photos; it worked perfectly for a few weeks, but suddenly today I can't open my photos in Photos

    I recently migrated from iPhoto to Photos; it worked perfectly and I used it for a few weeks, but today I suddenly cannot open my pictures in Photos. What's wrong?

    No idea, since we can't see your screen - you must provide details. Why can't you open the photos? What is happening? What is the exact error message you get? What version of the operating system and of Photos do you have?

    Also, there is a Photos for Mac forum, which is where Photos questions should be asked - I will ask for your message to be moved.

    LN
