Data cache vs. data retrieval buffers

Hello all-

I want to know how an increase or decrease in the data cache affects the size of the buffer used for data retrieval.

I was going through the SER60 and it says:

* "When you retrieve data into an Essbase Add-in for Excel worksheet or use the report writer to retrieve data, Essbase uses the retrieval buffer to optimize the retrieval." *

Is there an interdependence between the data cache and the data retrieval buffer size?

Also in the SER60 I found, in the section "Enabling Dynamic Retrieval Buffer Sizing", the following:

If a database has very large blocks and retrievals include a large percentage of cells from each block across several blocks, consider setting the VLBREPORT option in the essbase.cfg configuration file.

Has anyone here used this setting in the past who can tell me its advantages and disadvantages?

Thank you!

Important notes around the use of VLBREPORT:

The parameter does not apply to aggregate storage databases.
The VLBREPORT configuration setting applies only to databases that contain no Dynamic Calc, attribute, or Dynamic Time Series members.

Also, as far as I know, the data cache and the retrieval buffer are completely separate.
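If you do decide to experiment with it, the setting itself is a single line in essbase.cfg (the Essbase server must be restarted for it to take effect). This is a sketch based on the SER60 description above; verify the exact syntax against your release:

```
VLBREPORT TRUE
```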

See you soon

John
http://John-Goodwin.blogspot.com/

Tags: Business Intelligence

Similar Questions

  • 100% RAM used on Data Guard

    Hello

    We are testing our application with Oracle Data Guard.
    The Oracle version is 11g R2 (11.2.0.2).
    The Data Guard configuration is one primary and one physical standby server.
    Both servers have 16 GB of RAM. They are set up on an ESX server as two virtual machines.
    Each VM has two processors with a speed of 2127.902 MHz.
    The OS is Red Hat Enterprise Linux Server release 5.5 (Tikanga).
    It is an x86_64 architecture machine.

    The problem we see is that the consumption of RAM is almost 100% (97%).
    Earlier the RAM was 4 GB, which we believed was not enough, and we went to 16 GB after we found that even 8 GB was not enough.
    Even after setting the RAM to 16 GB, the entire memory was used.
    The application performs well; the responses are quick enough.
    On average, there are 12000 inserts per minute into the database.

    The CPU usage is about 25%.


    We are looking to reduce memory consumption to less than 80%, so that if other applications must run on the Data Guard server they are able to run, rather than the database using nearly 100% of memory during peak or abnormal conditions.

    It is a dedicated installation, with a single database instance running on the Data Guard server.

    Here is the memory usage (in MB):

    [root@vm-lnx-rds1174 logs]# free -m
                 total       used       free     shared    buffers     cached
    Mem:         16051      15650        401          0        267      13532
    -/+ buffers/cache:       1850      14201
    Swap:         8095          0       8095

    Here is the memory-related configuration set for the Oracle instance:

    alter system set shared_pool_size = 0 scope=spfile;
    alter system set db_cache_size = 0 scope=spfile;
    alter system set java_pool_size = 0 scope=spfile;
    alter system set large_pool_size = 0 scope=spfile;
    alter system set sga_max_size = 0 scope=spfile;
    alter system set sga_target = 0 scope=spfile;
    alter system set pga_aggregate_target = 0 scope=spfile;
    alter system set memory_target = 8G scope=spfile;
    alter system set memory_max_target = 8G scope=spfile;

    Details of the Data Guard configuration:

    DGMGRL> show configuration

    Configuration - DGConfig1

      Protection Mode: MaxPerformance
      Databases:
        PNETs - Primary database
        PNET  - Physical standby database

    Fast-Start Failover: DISABLED

    Configuration Status:
    SUCCESS


    Please advise how we can reduce the memory consumption on the machine.

    Thank you
    Krishna

    Hello

    on Linux, high memory use is normal. Linux uses all free memory as disk cache. You can compile this program to prove it:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(int ac, char **av)
    {
            long int n,m,i;
            long delay;
            char *buf;

            if(ac < 3) {
                    printf("Usage: %s <MB> <seconds>\n",av[0]);
                    exit(1);
            }
            sscanf(av[1],"%ld",&n);
            sscanf(av[2],"%ld",&delay);
            m=n*1024*1024;
            buf=malloc(m);
            if(buf==NULL) { perror("malloc"); exit(1); }
            for(i=0;i<m;i++)        /* touch every byte so the pages are really allocated */
                    buf[i]=0;
            sleep(delay);           /* hold the memory for the requested time */
            free(buf);
            return 0;
    }

    and run it as follows:
    ./a.out 4000 10

    It will allocate 4000 MB of RAM for 10 seconds. You'll see that once it's over, you have 4000 MB of free RAM again. After a while this amount will drop again as Linux performs I/O operations and refills the disk cache.

    Paul
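    Paul's point can be checked with a little arithmetic on the free output quoted earlier in the thread (the numbers are taken from that output; this is just a worked example, not new data):

    ```python
    # Values from the `free -m` output above, in MB.
    free_mb, buffers_mb, cached_mb = 401, 267, 13532

    # Memory the kernel can reclaim on demand: truly free pages plus the
    # buffer and page caches, which Linux hands back as soon as programs need it.
    available_mb = free_mb + buffers_mb + cached_mb
    print(available_mb)   # 14200 -> essentially the 14201 on the "-/+ buffers/cache" line
    ```

    So the "97% used" figure is mostly cache; the processes themselves hold only about 1.8 GB.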

  • Options for monitoring or retrieving system performance statistics on C-Series and EX-Series codecs

    Hello - I've been looking around to see if there is an option available today to retrieve system performance statistics (for example CPU load, memory, etc.) of an endpoint via the command line interface or dynamic XML polling (http 'getxml').  I understand that Cisco today has a company MIB available for these endpoints and that the API is needed to pull information from the codec; however, I want to be able to pull the CPU load and memory of these endpoints dynamically.  Is it possible to do that today?

    Thank you in advance,

    -Andrew

    Hello, Andrew!

    In general, it's something I wouldn't really worry about, and I'd be more skeptical about whether the monitoring itself generates additional load on the system. As the systems are Linux based, you have command line tools and the proc interface you can use to get stats on the endpoint. I'd rather watch when there actually are problems rather than trying to poll something dynamically; I think that is better handled on the endpoints themselves.

    For example, log in as root and try w or free:

    [Martin-Koch-EX90 - ATEA-Vaas:~] $ w
     00:17:02 up 14 days, 23 min, 1 user, load average: 0.00, 0.02, 0.05
    USER     TTY     LOGIN@   IDLE   JCPU   PCPU   WHAT
    root     pts/0   00:03    0.00s  0.31s  0.02s  w

    [Martin-Koch-EX90 - ATEA-Vaas:~] $ free
                 total       used       free     shared    buffers     cached
    Mem:        510912     309832     201080          0      25768     156616
    -/+ buffers/cache:     127448     383464
    Swap:            0          0          0

    The proc interface is also interesting:

    [Martin-Koch-EX90 - ATEA-Vaas:~] $ cat /proc/uptime
    1210174.87 1193259.96

    [Martin-Koch-EX90 - ATEA-Vaas:~] $ cat /proc/loadavg
    0.00 0.05 0.01 1/131 25905

    [Martin-Koch-EX90 - ATEA-Vaas:~] $ cat /proc/meminfo
    MemTotal:       510912 kB
    MemFree:        210312 kB
    Buffers:         25580 kB
    Cached:         156016 kB
    SwapCached:          0 kB
    Active:          70892 kB
    Inactive:       115896 kB

    ...

    You should be able to add an ssh key for automatic login, fetch the information you need, process it via a script, and then use this data in your monitoring tool.

    And of course you have xstatus, which you can also access via tsh or the http XML API; you can find more information in the API guide. As you wrote, there isn't that much exposed via SNMP.

    The intention itself is right though, so maybe you could file a feature request for SNMP traps and better proactive monitoring; the thing I miss the most on C/TC is the lack of traffic/network/loss reports.

    Please rate replies!
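    The "process it via a script" step above can be sketched in a few lines. Here the /proc/meminfo text is the sample quoted earlier; actually fetching it over ssh (e.g. `ssh root@endpoint cat /proc/meminfo`) is left out, so treat this as an illustrative parsing sketch:

    ```python
    # Sample taken from the endpoint output quoted above.
    sample = """MemTotal:       510912 kB
    MemFree:        210312 kB
    Buffers:         25580 kB
    Cached:         156016 kB"""

    # Parse "Key:  value kB" lines into a dict of integers (kB).
    meminfo = {}
    for line in sample.splitlines():
        key, _, rest = line.partition(":")
        meminfo[key.strip()] = int(rest.split()[0])

    # Percentage of memory in use (before discounting reclaimable cache).
    used_pct = 100 * (meminfo["MemTotal"] - meminfo["MemFree"]) / meminfo["MemTotal"]
    print(round(used_pct))   # 59
    ```

    The resulting numbers can then be fed to whatever monitoring tool you use.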

  • Resize the SWAP Partition.

    I want to resize my swap partition from 1.4 GB to 7 GB.

    Some data from the system is below:

    Welcome to SUSE Linux Enterprise Server 11 SP3 for VMware (x86_64) - Kernel \r (\l).

    # uname -a

    Linux Platinium2 3.0.76-0.11-default #1 SMP Fri Jun 14 08:21:43 UTC 2013 (ccab990) x86_64 x86_64 x86_64 GNU/Linux

    # swapon -s

    Filename        Type        Size     Used  Priority
    /dev/sda1       partition   1532924  0     -1

    # fdisk -l

    Disk /dev/sda: 591.6 GB, 591631745024 bytes
    255 heads, 63 sectors/track, 71928 cylinders, total 1155530752 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00033b0e

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1            2048     3067903     1532928   82  Linux swap / Solaris
    /dev/sda2   *     3067904   471859199   234395648   83  Linux
    /dev/sda3       471859200  1155530751   341835776   83  Linux

    # cat /etc/fstab

    /dev/sda1   swap                swap      defaults                 0 0

    /dev/sda2   /                   ext3      acl,user_xattr           1 1

    proc        /proc               proc      defaults                 0 0

    sysfs       /sys                sysfs     noauto                   0 0

    debugfs     /sys/kernel/debug   debugfs   noauto                   0 0

    devpts      /dev/pts            devpts    mode=0620,gid=5          0 0

    /dev/sda3   /local/apps         ext3      noatime,acl,user_xattr   1 2

    # free -k

                 total       used       free     shared    buffers     cached
    Mem:      16392148    1273944   15118204          0     149864     760472
    -/+ buffers/cache:     363608   16028540
    Swap:      1532924          0    1532924

    Thank you

    Paul

    If you are using ESXi 5.x, please check this: http://rickardnobel.se/esxi-5-0-partitions/

    If you use an earlier version of ESX, you can do it the same way as on other Linux systems.
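    As a supplement (not from the original replies): here is the size arithmetic for a 7 GB swap area, in the units the tools above report. Note that /dev/sda2 begins immediately after /dev/sda1 (sector 3067904), so growing the swap partition in place would mean moving /dev/sda2; a swap file or a new partition in free space is the usual low-risk alternative.

    ```python
    # Target swap size expressed in fdisk's units: 512-byte sectors, and the
    # 1 KiB blocks shown in the fdisk Blocks column and by `swapon -s`.
    target_bytes = 7 * 1024**3
    sectors = target_bytes // 512
    blocks = target_bytes // 1024
    print(sectors, blocks)   # 14680064 7340032
    ```

    Whichever route you take, the final steps are the same (as root): mkswap on the new area, swapon it, update /etc/fstab, and verify with `swapon -s`.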

  • ESXi 4.0 performance problem

    Hello everyone,


    I have a server with VMware ESXi 4.0.0 (208167) with the hardware characteristics listed; I am currently suffering performance problems and would appreciate some help from the community to point me in the right direction...

    Currently I have only one VM running; the operating system is CentOS 5.5 with cPanel (Bind, Proftpd FTP, HTTP, Apache, PHP, MySQL, PostgreSQL, Exim POP/SMTP).


    I started the VM on the server with 4 CPUs and assigned the entire capacity of 4 GB of RAM.

    The first problem: from the VM console, when I check the characteristics something strange happens; it only detects 3 GB of RAM instead of 4 GB, while the CPUs are detected correctly... :s


    root@cPanel [/]# cat /proc/cpuinfo
    processor: 0
    vendor_id: GenuineIntel
    CPU family: 6
    model: 26
    model name: Intel(r) Core i7 CPU 930 @ 2.80GHz
    stepping: 5
    CPU MHz: 2792.983
    cache size: 8192 KB
    fdiv_bug: no
    hlt_bug: no
    f00f_bug: no
    coma_bug: no
    FPU: Yes
    fpu_exception: Yes
    CPUID level: 11
    WP: Yes
    flags: fpu vme pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss nx rdtscp lm constant_tsc pni ssse3 cx16 sse4_1 sse4_2 popcnt lahf_lm nonstop_tsc ida [8]
    BogoMips: 5585.96


    processor: 1
    vendor_id: GenuineIntel
    CPU family: 6
    model: 26
    model name: Intel(r) Core i7 CPU 930 @ 2.80GHz
    stepping: 5
    CPU MHz: 2792.983
    cache size: 8192 KB
    fdiv_bug: no
    hlt_bug: no
    f00f_bug: no
    coma_bug: no
    FPU: Yes
    fpu_exception: Yes
    CPUID level: 11
    WP: Yes
    flags: fpu vme pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss nx rdtscp lm constant_tsc pni ssse3 cx16 sse4_1 sse4_2 popcnt lahf_lm nonstop_tsc ida [8]
    BogoMips: 5596.34


    processor: 2
    vendor_id: GenuineIntel
    CPU family: 6
    model: 26
    model name: Intel(r) Core i7 CPU 930 @ 2.80GHz
    stepping: 5
    CPU MHz: 2792.983
    cache size: 8192 KB
    fdiv_bug: no

    hlt_bug: no
    f00f_bug: no
    coma_bug: no
    FPU: Yes
    fpu_exception: Yes
    CPUID level: 11
    WP: Yes
    flags: fpu vme pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss nx rdtscp lm constant_tsc pni ssse3 cx16 sse4_1 sse4_2 popcnt lahf_lm nonstop_tsc ida [8]
    BogoMips: 5644.43


    processor: 3
    vendor_id: GenuineIntel
    CPU family: 6
    model: 26
    model name: Intel(r) Core i7 CPU 930 @ 2.80GHz
    stepping: 5
    CPU MHz: 2792.983
    cache size: 8192 KB
    fdiv_bug: no
    hlt_bug: no
    f00f_bug: no
    coma_bug: no
    FPU: Yes
    fpu_exception: Yes
    CPUID level: 11
    WP: Yes
    flags: fpu vme pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss nx rdtscp lm constant_tsc pni ssse3 cx16 sse4_1 sse4_2 popcnt lahf_lm nonstop_tsc ida [8]
    BogoMips: 5607.93


    root@cPanel [/]# free
                 total       used       free     shared    buffers     cached
    Mem:       3107536    2982092     125444          0     107176    1984144
    -/+ buffers/cache:     890772    2216764
    Swap:      2096472        124    2096348

    The second problem is the performance of the hard disks; I believe these timings are not normal:

    root@cPanel [/]# /sbin/hdparm -Tt /dev/sda


    /dev/sda:
     Timing cached reads:   30332 MB in  2.00 seconds = 15203.59 MB/sec
     Timing buffered disk reads:   46 MB in  3.03 seconds =  15.16 MB/sec

    root@cPanel [/]# /sbin/hdparm -Tt /dev/sda


    /dev/sda:
     Timing cached reads:   30400 MB in  1.99 seconds = 15239.13 MB/sec
     Timing buffered disk reads:  102 MB in  3.02 seconds =  33.78 MB/sec

    root@cPanel [/]# date
    Wed Jun  8 11:19:30 CEST 2011
    root@cPanel [/]# /sbin/hdparm -Tt /dev/sda

    /dev/sda:
     Timing cached reads:   28212 MB in  1.99 seconds = 14144.41 MB/sec
     Timing buffered disk reads:   16 MB in  3.06 seconds =   5.24 MB/sec

    In this last case, it was the moment of highest load of the day.

    Along with the slowness of the disks, I always have a high load average without having much CPU load from processes and connections.

    Thanks for everything.

    Regards.



    Hello willowmlg:

    The graphs you should send us are the ones for hard disk performance, not the CPU.

    You can see the controller model by clicking on Host --> Configuration --> Storage Adapters (send us a capture of everything that appears there).

    The cache I mean is not at the SAN/NAS level, but at the disk controller level. If you have no cache on the controller and the disks are SATA, performance will be very poor.

    Regards.
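    A quick check of the hdparm figures in this thread (simple division, not from the original posts) confirms how low the buffered disk reads really are:

    ```python
    # (MB read, seconds) for the three buffered-disk-read runs quoted above.
    runs = [(46, 3.03), (102, 3.02), (16, 3.06)]
    for mb, sec in runs:
        print(round(mb / sec, 2), "MB/s")
    # 15.18, 33.77 and 5.23 MB/s -- a healthy single SATA disk normally sustains
    # 60+ MB/s sequential, so the point above about missing controller cache
    # (or contention) is well taken.
    ```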

  • Best way to access 2 TB+ volumes

    Please tell me if you think this is the best approach.  I'm new on ESXi and I'm still learning how to take advantage of it...

    I have a server with 8 x 2 TB Seagate drives, a material in Areca 1230 raid controller and an additional Seagate hard disk of 500GB.

    I want to get the highest throughput and the best performance from the 8 x 2 TB drives and the Areca raid controller, which supports RAID 5, and to be able to use the disk space in ESXi.

    The system is currently configured as follows:

    -ESXi is installed on the 500 GB hard disk; I would like to use a USB flash drive,

    but the motherboard does not support booting from a USB flash drive.

    -The 8 x 2 TB drives are assigned to one RAID set, and within that raid set, eight 2 TB RAID-5 volumes have been created.

    -Inside ESXi, the first raid volume was used to create a datastore, and then I added the remaining volumes as extents.

    -Gigabit ethernet (full duplex) - no jumbo frames

    -6 GB of system RAM with a Core 2 Duo processor

    An amount of space from the new datastore was assigned to one of the virtual machines and everything tested well.  For example, I was able to copy a 2.4 GB file from one directory to another in 24 seconds.  This looks pretty good to me, since the disk specifications show that they can sustain 95 MB/s of throughput.  I am trying to find out how to install / use IOMeter to get more accurate test results.

    I created a SAMBA share and ran two tests.  Copying a file to the server from the network sustained between 50 MB/s and 54 MB/s.   Reading a file from the server also sustained between 50 MB/s and 54 MB/s.  This seems odd for two reasons.

    First, I tested the system with Ubuntu and SAMBA (no ESXi) and the files copied to the server at 66 MB/s.  That would imply that ESXi incurs overhead that slows down network performance by about 12 MB/s to 16 MB/s.  This seems like a lot and gives me the impression that I did something wrong.

    Second, with ESXi installed, network throughput was between 50 MB/s and 54 MB/s all the time.  There was no initial burst of high performance, and throughput did not slow down over time.  This seems odd.  Wouldn't buffers / cache come into play?  For example, the Areca raid controller has 1 GB of RAM.  At the very least, shouldn't I expect higher rates until the cache is full?  I've seen this behavior on other NAS drives and I'm a little surprised that I'm not seeing it here.

    Thanks in advance for any thoughts, comments, or recommendations you have on any aspect of this - whether about the raid controller, how it was configured, how to better use the space in ESXi, or even advice on what kind of network throughput to expect.

    For once I disagree with DSTAVERT (sorry) - given that all eight extents are hosted on the same underlying RAID volume, there is no additional risk in this configuration compared to a single 8 TB RAID-5 set used without ESX.  If something goes wrong in this scenario, all eight extents would be affected simultaneously.

    With respect to performance, this seems reasonable to me.  Keep in mind that for virtualization it is generally much more about random IOPS than sequential read (or write) throughput.  Eight spindles will help with that, but SATA disks quickly get bogged down, unfortunately.

    Since you only have a Core 2 Duo, I would suggest running guests with 1 vCPU only.

    Please give points for any helpful answer.
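    For context on the numbers in this thread (a rough calculation, not from the original posts), gigabit Ethernet itself caps well above what was measured, so the NIC is not the bottleneck:

    ```python
    # Raw gigabit wire rate converted to MB/s; the practical ceiling after
    # TCP/IP and SMB overhead is an assumption (roughly 5-10% lower).
    wire_mb_s = 1_000_000_000 / 8 / 1_000_000
    print(wire_mb_s)   # 125.0
    # The observed 50-54 MB/s is well under this, pointing at the storage/SMB
    # path rather than the gigabit link itself.
    ```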

  • Increase the SGA in Oracle 10g

    Hello world

    I'm Ann. I'm not a DBA, but because the senior DBA is on maternity leave, the developers and I must step up to deal with the reports, an Oracle Forms application, and the Oracle database (version 10.1).
    Recently the apps people have been complaining that it is slow.
    So I opened Enterprise Manager and checked the performance advisor; it shows a finding:
    "The buffer cache was too small, causing significant additional read I/O."
    Impact (minutes): 5.31
    Impact (%): 13.13
    The recommendation is to increase the SGA size by raising the value of the parameter "sga_target" by 928 M (there is an implement button next to the message).
    The rationale given is:
    "The buffer cache was too small, causing significant additional read I/O."    13.13

    "Wait class 'User I/O' was consuming significant database time."    16.27

    On the server we use OpenSUSE. I also connected to the server as root.
    I typed the command free to get info on memory, and here are the details:
    Borg:~ # free
                 total       used       free     shared    buffers     cached
    Mem:       6101512    6001288     100224          0     180376    4636256
    -/+ buffers/cache:    1184656    4916856
    Swap:     12586916    1728548   10858368


    I just want to know: is it safe to run the implement button to increase the SGA target size?
    I know I will need to stop and restart the instance.

    Thank you very much
    Ann

    >
    I just want to know: is it safe to run the implement button to increase the SGA target size?
    >
    You shouldn't do anything until:

    1. you confirm that there is, indeed, a problem

    2. you determine exactly what the problem is

    3. you determine the cause or causes of the problem

    4. you identify and classify possible solutions to the problem

    5. you test possible solutions in a test/dev environment to make sure that your process works as expected

    6. you script a method to 'undo' the changes you make
    >
    Recently the apps people have been complaining that it is slow.
    >
    So what! People always complain that the apps are slow. While it's good to take note of that, do not have a reflex reaction to it.

    Gather more information.

    1. Slow doing what? data entry? running reports? batch jobs?

    2. Slow when? First thing in the morning? In the middle of the day when users are on the system? All the time?

    3. How long has it been slow? When did it start? A few days ago? A few weeks ago?

    4. What does the main DBA say about the reported slowness? Have you called to ask? Why not? They are on maternity leave, not climbing the Alps or the Himalayas. Call them and ask whether they were aware of the reported problems.

    5. Have you asked the senior DBA what you should watch? Why not?

    Tell us more information. What are the memory parameters now?

    Open a SQL*Plus terminal, run the following, and post the results:
    1. SHOW PARAMETER SGA_TARGET

    2. SHOW SGA

    3. SHOW PARAMETER POOL
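    As a quick feasibility check on Ann's numbers (arithmetic on the free output above; it shows only that the advisor's 928 M increase fits in RAM, not that it is the right fix):

    ```python
    # Values in kB from the `free` output earlier in the thread.
    free_kb, buffers_kb, cached_kb = 100224, 180376, 4636256

    # Memory Linux can reclaim for a larger SGA: free pages plus the caches.
    reclaimable_mb = (free_kb + buffers_kb + cached_kb) // 1024
    print(reclaimable_mb)   # 4801 -> comfortably above the requested 928 MB
    ```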

  • Oracle DB automatically stopping and starting

    Hello
    We have Oracle 11g DB installed. But for the last three days, Oracle has automatically stopped itself at 02:00 and then started up again.
    After looking at alert_${ORACLE_SID}.log I really don't understand why this is happening. I also watched the free memory on our Linux machine with the command "free -tm", which displays the following result:

                 total       used       free     shared    buffers     cached
    Mem:         11514      11431         82          0        198       7450
    -/+ buffers/cache:       3783       7730
    Swap:        14015       2874      11141
    Total:       25529      14306      11223

    But I'm not sure the automatic shutdown and startup is due to a lack of memory.
    Is there perhaps another reason that causes this problem?

    The following listing is copied from alert_${ORACLE_SID}.log:

    ///////////////////////////////////////////////////alert_${ORACLE_SID}.log//////////////////////////////////////
    Fri Aug 05 23:31:42 2011
    Thread 1 cannot allocate new log, sequence 156
    Private strand flush not complete
      Current log# 2 seq# 155 mem# 0: /home/oracle/app/oracle/oradata/autsid/redo02.log
    Thread 1 advanced to log sequence 156 (LGWR switch)
      Current log# 3 seq# 156 mem# 0: /home/oracle/app/oracle/oradata/autsid/redo03.log
    Sat Aug 06 00:46:08 2011
    Thread 1 cannot allocate new log, sequence 157
    Private strand flush not complete
      Current log# 3 seq# 156 mem# 0: /home/oracle/app/oracle/oradata/autsid/redo03.log
    Thread 1 advanced to log sequence 157 (LGWR switch)
      Current log# 1 seq# 157 mem# 0: /home/oracle/app/oracle/oradata/autsid/redo01.log
    Clearing Resource Manager plan via parameter
    Sat Aug 06 02:00:22 2011
    Shutting down instance (immediate)
    Stopping background process SMCO
    Shutting down instance: further logons disabled
    Sat Aug 06 02:00:25 2011
    Stopping background process CJQ0
    Stopping background process QMNC
    Stopping background process MMNL
    Stopping background process MMON
    License high water mark = 31
    All dispatchers and shared servers shutdown
    ALTER DATABASE CLOSE NORMAL
    Sat Aug 06 02:00:31 2011
    SMON: disabling tx recovery
    SMON: disabling cache recovery
    Sat Aug 06 02:00:31 2011
    Stopping Archive processes
    Archiving is disabled
    Archive process shutdown avoided: 0 active
    Thread 1 closed at log sequence 157
    Successful close of redo thread 1
    Completed: ALTER DATABASE CLOSE NORMAL
    ALTER DATABASE DISMOUNT
    Completed: ALTER DATABASE DISMOUNT
    ARCH: Archival disabled due to shutdown: 1089
    Stopping Archive processes
    Archiving is disabled
    Archive process shutdown avoided: 0 active
    Sat Aug 06 02:00:32 2011
    Stopping background process VKTM
    ARCH: Archival disabled due to shutdown: 1089
    Stopping Archive processes
    Archiving is disabled
    Archive process shutdown avoided: 0 active
    Sat Aug 06 02:00:35 2011
    Instance shutdown complete
    Sat Aug 06 02:00:36 2011
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 3
    Using LOG_ARCHIVE_DEST_1 parameter default value as USE_DB_RECOVERY_FILE_DEST
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =51
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    Starting up:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options.
    Using parameter settings in server-side spfile /home/oracle/oraBase/product/11.2.0/dbhome_1/dbs/spfileautsid.ora
    System parameters with non-default values:
      processes                = 300
      sessions                 = 472
      memory_target            = 5696M
      control_files            = "/home/oracle/app/oracle/oradata/autsid/control01.ctl"
      control_files            = "/home/oracle/oraBase/recovery_area/autsid/control02.ctl"
      db_block_size            = 8192
      compatible               = "11.2.0.0.0"
      db_recovery_file_dest    = "/home/oracle/oraBase/recovery_area"
      db_recovery_file_dest_size = 3882M
      undo_tablespace          = "UNDOTBS1"
      remote_login_passwordfile = "EXCLUSIVE"
      db_domain                = ""
      dispatchers              = "(PROTOCOL=TCP) (SERVICE=autsidXDB)"
      shared_servers           = 5
      audit_file_dest          = "/home/oracle/oraBase/admin/autsid/adump"
      audit_trail              = "DB"
      db_name                  = "autsid"
      open_cursors             = 300
      diagnostic_dest          = "/home/oracle/oraBase"
    Sat Aug 06 02:00:38 2011
    PMON started with pid=2, OS id=13030
    Sat Aug 06 02:00:39 2011
    VKTM started with pid=3, OS id=13032 at elevated priority
    VKTM running at (10) millisecs precision with DBRM quantum (100)ms
    Sat Aug 06 02:00:39 2011
    GEN0 started with pid=4, OS id=13036
    Sat Aug 06 02:00:39 2011
    DIAG started with pid=5, OS id=13038
    Sat Aug 06 02:00:39 2011
    DBRM started with pid=6, OS id=13040
    Sat Aug 06 02:00:39 2011
    PSP0 started with pid=7, OS id=13042
    Sat Aug 06 02:00:39 2011
    DIA0 started with pid=8, OS id=13044
    Sat Aug 06 02:00:39 2011
    MMAN started with pid=9, OS id=13046
    Sat Aug 06 02:00:39 2011
    DBW0 started with pid=10, OS id=13048
    Sat Aug 06 02:00:39 2011
    LGWR started with pid=11, OS id=13050
    Sat Aug 06 02:00:39 2011
    CKPT started with pid=12, OS id=13052
    Sat Aug 06 02:00:39 2011
    SMON started with pid=13, OS id=13054
    Sat Aug 06 02:00:39 2011
    RECO started with pid=14, OS id=13056
    Sat Aug 06 02:00:39 2011
    MMON started with pid=15, OS id=13058
    ///////////////////////////////////////////////////alert_${ORACLE_SID}.log//////////////////////////////////////

    Khosro.

    Published by: user1045485 on August 6, 2011 03:11

    >

    Sat Aug 06 02:00:22 2011
    Shutting down instance (immediate)
    Stopping background process SMCO

    >

    SQL> select COMMAND_ID from V$RMAN_BACKUP_JOB_DETAILS;

    COMMAND_ID
    ---------------------------------
    2011-08-06T02:00:45
    2011-08-04T02:00:46
    2011-07-29T02:00:50
    2011-08-05T02:00:47

    I think there are four rows; what I understand from COMMAND_ID is that they started at 02:00 over the course of those days. Am I right?

    Yes, that's right. I would investigate those RMAN backups further; it seems that someone is making a cold RMAN backup or something similar...

  • Redo log buffer question

    Hi master,

    This seems to be very basic, but I would like to know internal process.

    We all know that LGWR writes redo entries to the online redo logs on disk. On commit, an SCN is generated and tagged to the transaction, and LGWR writes this to the online redo log files.

    But my question is: how do these redo entries get into the redo log buffer? All the necessary data is read into the buffer cache by the server process, modified there, and committed. DBWR writes this to the data files, but at what point, and by which process, is this committed transaction (the redo entry, I mean) written into the redo log buffer?

    Does LGWR do that? What exactly happens internally?

    If you could please shed some light on the internals, I would be grateful...


    Thanks and greetings
    VD

    Vikrant,
    I will write less because I am using a PDA. In general, this is what happens:
    1. A calculation is made of how much space is required in the log buffer.
    2. The server process acquires a redo copy latch to indicate that some redo will be copied.

    3. A redo allocation latch is used to allocate space in the log buffer.

    4. The redo allocation latch is released once the space is allocated.

    5. The redo copy latch is held while copying the redo content into the log buffer.

    6. The redo copy latch is released.

    HTH
    Aman
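    Aman's six steps can be illustrated with a toy sketch (the names and structure here are invented for illustration; this is not Oracle's actual implementation, only the latch ordering he describes):

    ```python
    import threading

    # Toy stand-ins for the two latches and the redo log buffer.
    redo_copy_latch = threading.Lock()
    redo_allocation_latch = threading.Lock()
    log_buffer = []

    def write_redo(entry):
        with redo_copy_latch:                # step 2: announce redo will be copied
            with redo_allocation_latch:      # step 3: allocate space in the buffer
                log_buffer.append(None)      # step 1's computed space, reserved here
                slot = len(log_buffer) - 1
            # step 4: allocation latch released as soon as space is reserved
            log_buffer[slot] = entry         # step 5: copy redo under the copy latch
        # step 6: copy latch released; LGWR later flushes log_buffer to disk

    write_redo("redo for: update t set x = 1")
    print(log_buffer)   # ['redo for: update t set x = 1']
    ```

    The key point the sketch mirrors is that the allocation latch is held only briefly, while the copy latch covers the whole copy.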

  • HP Pavilion dv7-3165dx

    Hello

    I have an HP Pavilion dv7-3165dx that no longer starts.  The screen gives me a SMART hard disk error saying: "SMART hard drive check has detected an imminent failure.  To ensure no data loss, please back up the content immediately and run the hard drive test in system diagnostics."  What should I do?  Help, I really need the information on my computer.

    Hello

    Given the error at startup, it looks like the HARD drive is failing, which would be consistent with the inability to repair or reload the operating system.

    First, regarding the data on the disk: if it is vital to get it back, your best option would be to seek the services of a company that specializes in data retrieval, but this can be quite expensive.

    Another couple of options you can try are the following; however, it is important to note that the odds of retrieval depend on the state of the HARD drive.

    1. One possible way to try to recover your files from a non-booting disk is to follow the process described in the link below.  The Ubuntu operating system CD you create can be launched from the CD alone (i.e. it doesn't have to be installed on the hard drive), and I've often found it can successfully extract data even from a failing hard drive.  Once you have created the CD, follow the instructions and see if you can save your existing files.

    http://www.howtogeek.com/HOWTO/Windows-Vista/use-Ubuntu-Live-CD-to-backup-files-from-your-dead-Windo...

    2. Another option would be to remove the HARD drive, place it in an external enclosure - there is an example in the link below - connect it to another PC, and see if you can access your files.

    External 2.5" HARD drive caddy.

    ***************************************************************************

    As for replacing the HARD drive, the drives in the links below are examples that would be perfect for your laptop.

    500 GB hard drive for laptop

    750 GB hard drive for laptop.

    The procedure to replace the hard drive begins on page 64 of your Maintenance & Service Guide.

    Once this is done, simply use your recovery DVD to reinstall the operating system on the new drive.

    If you do not have your recovery discs, you can order a replacement set using the link below.

    Order HP recovery disks.

    If you have a problem with this link, order them directly from HP.

    If you live in the United States, contact HP here.

    If you are in another part of the world, begin here.

    Kind regards

    DP - K

  • Arrays and DataProviders

    Right then, my app is making it difficult to enter data into a drop-down list item and I'm stuck, so I would kindly ask for support.

    The code that gets the info is:

    var stLinkUrls:URLVariables = new URLVariables(evt.target.data);
    dpp = stLinkUrls.stbbapi_links.split(",");
    

    The code further down that displays the drop-down list is the following:

    var dp:DropDown= new DropDown();
    dp.rowHeight = 45;
    dp.rowCount=3;
    dp.width = 250;
    for(var i in dpp){
        dpp.push({label:"http://tncr.ws/"+i.toString()});
    }
    dp.dataProvider = new DataProvider(dpp);
    addChild(dp);
    

    The URLVariables object gets its info from my server, from a PHP page which has 2 variables. You can see an example of the output of my PHP page here if you like: http://tncr.ws/example_beta5.php?apiv=110&key=23f452fae5943d0b5e059b3e5a71e097&s=http: / / www.nickdodd...

    Hey nick,

    Alrite so I got it to work. a few things that you need to change in your output from the Web server. to separate URLVariables, you will have to insert '&' between the two, a bit like a query string in your url in the browser bar. second thing you want to do is instead of use "\n" to separate your link ID and real URL, use the bar rather ' |. ' (just above the Enter key on most keyboards - might have to hit the shift). Here is a link to a page with the correct URLVariable string that you should be using my suggestions:

    http://www.rabcore.com/workshop/PlayBook/variableTest.php

    After getting it to look like that, parsing takes a few steps. First you get the raw data and split it on commas like you did. Then you take that and break it down further to separate your URLs and link IDs, then you add them to the dataprovider and set it up. Here's some code that shows you how to do exactly that:

    URLLoaderTest.as:

    package {
        import flash.display.Sprite;
        import flash.display.StageAlign;
        import flash.display.StageScaleMode;
        import flash.events.Event;
        import flash.net.URLLoader;
        import flash.net.URLRequest;
        import flash.net.URLVariables;

        import qnx.ui.data.DataProvider;
        import qnx.ui.listClasses.DropDown;

        [SWF(width="1024", height="600", backgroundColor="#CCCCCC", frameRate="30")]
        public class URLLoaderTest extends Sprite
        {
            /**
             * Make all variables accessible throughout the application.
             */
            private var loader:URLLoader;
            private var request:URLRequest;

            private var dropDown:DropDown;
            private var dpp:Array;
            private var dataProvider:DataProvider;

            public function URLLoaderTest()
            {
                super();

                // support autoOrients
                stage.align = StageAlign.TOP_LEFT;
                stage.scaleMode = StageScaleMode.NO_SCALE;

                /**
                 * Use the loader class to load your URL with the correct
                 * URLVariable string.
                 */
                request = new URLRequest("http://www.rabcore.com/workshop/playbook/variableTest.php");

                loader = new URLLoader();
                loader.load(request);
                loader.addEventListener(Event.COMPLETE, handleData);
            }

            private function handleData(e:Event):void
            {
                var stLinkUrls:URLVariables = new URLVariables(e.target.data);

                /** First round of parsing the data retrieved from the server. */
                var stLinkUrls_parsed:Array = stLinkUrls.stbbapi_links.split(",");

                /**
                 * Now that we have an array of URLs with their link IDs,
                 * we pass it to the setupDropDown() method.
                 */
                setupDropDown(stLinkUrls_parsed);
            }

            private function setupDropDown(arr:Array):void
            {
                dpp = new Array();

                /**
                 * Here is where the money is: you iterate through your
                 * array of links.
                 */
                for each (var linkData:String in arr)
                {
                    /**
                     * Remember that they are in the [ID]|[link] format, so we
                     * have to split it further to keep the ID and link separate.
                     * We then store them in a new object that we can just push
                     * into the drop-down array.
                     */
                    var linkDataObj:Object = new Object();

                    /** Split the links at the "|". */
                    var linkDataArray:Array = linkData.split("|");

                    /**
                     * Since the ID is first and the URL is second in the split,
                     * we can just append http://tncr.ws/ to the beginning and
                     * save the actual URL to the "url" portion of the object
                     * we are adding.
                     */
                    linkDataObj.label = "http://tncr.ws/" + linkDataArray[0];
                    linkDataObj.url = linkDataArray[1];

                    /** Push it into the drop-down array. */
                    dpp.push(linkDataObj);
                }

                /** Now we just set the drop-down up. */
                dataProvider = new DataProvider(dpp);

                dropDown = new DropDown();
                dropDown.dataProvider = dataProvider;
                dropDown.width = 400;
                dropDown.rowCount = 5;
                dropDown.setPosition(10, 10);

                addChild(dropDown);
            }
        }
    }

    I left comments throughout that should walk you through it. hope that sheds some light on the issue. Good luck!

    EDIT: corrected the code to reflect your original display code.
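The split-on-commas-then-on-pipes logic can also be sketched in isolation. This is a sketch in plain JavaScript rather than ActionScript, and the sample data string is hypothetical, just following the comma-separated [ID]|[URL] format described above:

```javascript
// Hypothetical raw value of stbbapi_links: comma-separated "[ID]|[URL]" pairs.
const raw = "abc12|http://www.example.com/page1,def34|http://www.example.com/page2";

const items = raw.split(",").map(function (entry) {
  const parts = entry.split("|");          // parts[0] = ID, parts[1] = URL
  return {
    label: "http://tncr.ws/" + parts[0],   // shortened display label
    url: parts[1]                          // actual destination URL
  };
});

console.log(items[0].label);  // http://tncr.ws/abc12
console.log(items[1].url);    // http://www.example.com/page2
```

Each resulting object has the same shape as the ones pushed into the drop-down's dataprovider array above.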

  • HP G62 freezing after startup

    Hello

    I have a G62 which keeps freezing a few minutes after startup. After reading the forums, I ran the HP SMART check and the drive test, and I got the following response:

    SMART Check: passed

    Short DST: failed

    Failure ID: Q4G5W2-5AR71F-9XL03F-60R003

    Product ID: XF332EA ~ ABU

    Hard drive

    I have Windows 7 64-bit o/s

    Can someone please help

    Hi George,

    You will need to use your recovery DVDs to reinstall the operating system to a new disk - this is described in the HP document here.

    If you do not have your recovery discs, you can order a replacement set.  Looking at the model number of the laptop, I guess you live in the United Kingdom - if that is correct, you will need to use the 3rd-party supplier on the link below.

    HP recovery disks.

    For any personal data you may have on the existing drive, your best chance of data recovery would be to use the services of a company that specializes in data retrieval, but that is a fairly expensive road to go down.

    Another option you can try (and much cheaper) would be to remove the HARD disk, put it in an external enclosure - an example of what you need is on the link below - connect it to another PC and see if you can access your files.

    2.5" external HDD caddy.

    Best regards

    DP - K

  • old age and the primary key index

    This question has lingered with me for a long time. When I have a (non-partitioned) table with column A as the primary key, does it make any difference (in data retrieval) if I create an index on column A? The Oracle version I use is 10.2.0.3. My assumption was that the PK already stores records in sorted order, so an index is not necessary.

    I also have a question on indexes on global temporary tables. A global temporary table is created once, with its index. Then it is used by each session. Since the data in a global temporary table is session-specific, does Oracle create a separate index for each session?

    Hello

    When you create a primary key on a column in a table, a unique index is created automatically; you do not need to create a separate one.

    Oracle Concepts:

    You can create indexes on temporary tables using the CREATE INDEX statement. Indexes created on temporary tables are also temporary, and the data in the index has the same session or transaction scope as the data in the temporary table.

    You can create views that access both temporary and permanent tables. You can also create triggers on temporary tables.

    For more information, see [temporary Tables | http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/schema.htm#sthref769]

    Kind regards

    Published by: Walter Fernández on February 7, 2009 22:09 - Add URL...
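    As a quick way to confirm the automatically created index, you can create a table with a primary key and then query USER_INDEXES without ever issuing CREATE INDEX. (A sketch; the table and constraint names here are made up.)

```sql
-- Hypothetical table: the PK constraint auto-creates a unique index.
CREATE TABLE demo_pk (
    a NUMBER,
    CONSTRAINT demo_pk_pk PRIMARY KEY (a)
);

-- No CREATE INDEX was issued, yet a unique index should be listed:
SELECT index_name, uniqueness
FROM   user_indexes
WHERE  table_name = 'DEMO_PK';
```

    If an index with the same columns already exists, Oracle can reuse it to enforce the constraint instead of creating a new one.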

  • Memory and the use of the disc on my IDS 4235 sensor &amp; 4250.

    My IDS sensor's memory usage shows 99% in use, and the hard drive already uses 5 of the 15 Gig. Here is the output of "show ver":

    Using 398913536 out of 1980493824 bytes of available memory (99% usage)

    Using 5 out of 15 gigabytes of available disk space (66% usage)

    - Only med- and high-severity signatures are enabled. Why does the sensor use this much memory?

    - Does the IDS sensor have a database that stores the logs, which would explain the used hard drive space? (considering that it is managed with IDM)

    - Or is there any other reason why so much hard drive space is used, given that the drive is new and uptime is only 2 months?

    - Have the signature file updates grown to take up this large space on the HARD drive?

    I hope someone could give me an idea of why this is so.

    As I said earlier, there is not a problem with the disk space usage. The memory usage bug is fixed in the 5.X product, not 4.X. However, there are some good bug fixes in the 4.1(4g) engineering patch.

    The real memory usage number can be determined from the service account by entering the following command:

    bash-2.05$ free

    total used free shared buffers cached

    Mem: 1934076 1424896 509180 0 18284 1214536

    -/+ buffers/cache: 192076 1742000

    Swap: 522072 0 522072

    The "used" column of the "Mem:" line is the amount of memory (in kilobytes) that the "show version" command reports. However, this total includes the "cached" amount.

    So in the example above, the actual memory used is (1424896 - 1214536), or 210360 KB. That is (210360 / 1934076 * 100), or 10.9% of total memory.
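The arithmetic in that last paragraph can be sketched as a small Python helper (the function name is mine; the values come from the example `free` output above, and it follows the text's calculation of subtracting the cached amount from "used"):

```python
# Actual memory usage from `free` output (values in KB): subtract the
# cached amount from "used", as described above.
def actual_usage(total_kb, used_kb, cached_kb):
    actual_kb = used_kb - cached_kb
    percent = actual_kb / total_kb * 100
    return actual_kb, percent

# Numbers from the example output:
actual_kb, percent = actual_usage(1934076, 1424896, 1214536)
print(actual_kb, round(percent, 1))  # 210360 10.9
```

The "-/+ buffers/cache" line of `free` goes one step further and also subtracts the buffers column.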

  • ORA-27102: out of memory

    Hi all

    11 GR 2

    RHEL6.5

    I'm starting our TEST database, but I got this error:

    $ sqlplus / as sysdba

    SQL*Plus: Release 11.2.0.4.0 Production on Mon Feb 1 09:21:46 2016

    Copyright (c) 1982, 2013, Oracle.  All rights reserved.

    Connected to an idle instance.

    SQL> startup

    ORA-27102: out of memory

    Linux-x86_64 Error: 28: No space left on device

    Additional information:-33554432

    Additional information: 1

    Given the memory below, how much can I allocate to my SGA to avoid this error?

    # free -m

    total used free shared buffers cached

    Mem: 14882 9679 5202 1012 29 1759

    -/+ buffers/cache: 7890 6991

    Swap: 20488 8881 11607

    I have also set shmall in /etc/sysctl.conf:

    # Controls the maximum amount of shared memory, in pages

    kernel.shmall = 4294967296

    Thank you very much

    JC

    shmall is the total amount of shared memory, in pages, that the system can use at one time.

    Set shmall equal to the sum of all SGAs on the system, divided by the page size.

    The page size can be determined by using the following command:

    $ getconf PAGE_SIZE

    4096

    For example, if the sum of all SGAs on the system is 16 GB and the result of

    "$ getconf PAGE_SIZE" is 4096 (4 KB), then set shmall to 4194304 pages.

    As the root user, set shmall to 4194304 in the /etc/sysctl.conf file:

    kernel.shmall = 4194304

    then run the following command:

    $ sysctl -p

    $ cat /proc/sys/kernel/shmall

    4194304

    The above commands load the new value, and a restart is not required.

    Switch back to the oracle user and retry the startup command.
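The calculation above can be sketched as a small shell snippet (assuming a combined SGA total of 16 GB, as in the example):

```shell
# kernel.shmall = total shared memory (sum of all SGAs) in bytes,
# divided by the system page size.
SGA_TOTAL_BYTES=$((16 * 1024 * 1024 * 1024))  # 16 GB, as in the example
PAGE_SIZE=$(getconf PAGE_SIZE)                # typically 4096 on x86_64
echo $((SGA_TOTAL_BYTES / PAGE_SIZE))
```

With a 4 KB page size this prints 4194304, the value to put in /etc/sysctl.conf.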
