Shared pool larger than the buffer cache

Hi all

My database is 10.2.0.4 running on the Linux platform.

No. of CPUs: 2, RAM: 2 GB

SGA_TARGET was set to 1 GB.

Initially the memory was configured with a shared pool of around 300 MB and a buffer cache of about 600 MB.

When I queried the V$SGA_RESIZE_OPS view I found some interesting results.

Many grow and shrink operations have happened, and the current size of the shared pool is about 600 MB while the buffer cache is 300 MB. (This happened during the last 1.)
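For reference, the resize history can be pulled with a query along these lines (a sketch; columns as documented for V$SGA_RESIZE_OPS in 10.2):

    -- list the grow/shrink operations ASMM has performed since startup
    SELECT component, oper_type, initial_size, target_size, final_size,
           TO_CHAR(start_time, 'DD-MON HH24:MI:SS') AS started
    FROM   v$sga_resize_ops
    ORDER  BY start_time;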

I guess that the buffer cache must always be larger than the shared pool. Is my assumption right?

Is it because of SQL code that does not use bind variables, resulting in shared pool growth? Reloads and invalidations are almost negligible, so I think that should not be the case.

Also, no lock events are listed in the Top 5 wait events.

I've also seen that 15% of the shared pool is marked as KGH: NO ACCESS, which means that part is being used for the buffer cache.

Should I set lower limits for the shared pool and the buffer cache, or can I just ignore this?

Thank you
rajdhanvi

You changed your question now... your original question was whether a shared pool larger than the buffer cache is acceptable... Check your own second post... your new question now is why the shared pool keeps growing and is partly used as buffer cache... the explanation is given by Tanel Poder on what happens when ASMM is used... As for KGH: NO ACCESS, it means memory that no one else is allowed to touch...
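If you do decide to set floors: under ASMM (SGA_TARGET > 0) the classic sizing parameters act as minimum values for their components rather than fixed sizes. A sketch, assuming an spfile and purely illustrative sizes:

    -- with SGA_TARGET set, these become lower bounds, not fixed allocations
    ALTER SYSTEM SET shared_pool_size = 250M SCOPE=BOTH;
    ALTER SYSTEM SET db_cache_size    = 400M SCOPE=BOTH;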

Regards
Karan

Tags: Database

Similar Questions

  • Updated data is larger than the buffer cache

    Hi Experts,

    I have a small request. I have a table called CONTENTS that holds 12 GB of data. I issued a single UPDATE statement that updates 8 GB of the CONTENTS table's data, using a 1 GB database buffer cache.

    How will 1 GB of database buffer cache be used to update 8 GB of data? At the architectural level, will any additional changes happen (compared to usual) when the updated data is larger than the buffer cache?

    Could one of you please respond? Thank you.

    Database: 10.2.0.5

    OS: AIX (64-bit) on Power5

    Hello

    The basic mechanism is the following:

    The data blocks needed for the update are read from the data files and cached in memory (the buffer cache); the update is made in the buffer cache, and the before (UNDO) image is stored in the undo segments; the operation (here an update) is also recorded in the redo buffer, which is flushed to the redo log files. If the buffer cache is small, or more space is needed in it, or a checkpoint occurs, etc., Oracle writes the modified blocks back to the data files to free buffer memory for more blocks.

    While the update runs, other transactions can read the before image from UNDO. On commit, at the end of the transaction, the change is confirmed and the commit is recorded in the redo. On rollback at the end of the transaction, the before image is "restored" and the rollback itself is also recorded in the redo.
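    If you want to watch this happening during a big update, a rough sketch is to sample the relevant statistics before and after the statement (statistic names as in 10g V$SYSSTAT):

    -- deltas of these show blocks read, blocks written back by DBWR,
    -- redo generated and undo generated by the big update
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('physical reads', 'physical writes',
                    'redo size', 'undo change vector size');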

    Regards

  • Read data larger than the DB buffer Cache

    DB version: 10.2.0.4
    OS: Solaris 5.10


    We have a DB with 1 GB for DB_CACHE_SIZE. Automatic shared memory management is disabled (SGA_TARGET = 0).

    If a query is fired against a table and it will fetch 2 GB of data, will the session hang? How does Oracle handle this?

    Tom wrote:
    If the fetched blocks get automatically aged out of the buffer cache by the LRU algorithm once they are retrieved, then Oracle should handle this without any problem. Right?

    Yes. No problem, in that a fetch's size (for example, selecting 2 GB worth of rows) does not need to fit completely into the (only 1 GB) db buffer cache.

    As Sybrand mentioned, in this case everything is flushed out as more recent data blocks are read... and those are flushed shortly thereafter as even more recent data blocks are read.

    The cache hit ratio will be low.

    But this will not cause Oracle errors or problems - it simply degrades performance, as the volume of data being processed exceeds the capacity of the cache.

    It's like running a very large program that requires more RAM than is available on a PC. The "additional RAM" is the page file on disk. The app will be slow because its memory pages (some on disk) must be swapped in and out of memory as needed. It would run faster if the PC had enough RAM. However, the o/s is designed to deal with exactly this situation of needing more RAM than is physically available.

    It is a similar situation when processing more data than the buffer cache has capacity for.
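    To quantify that, here is a rough sketch of the classic hit-ratio calculation from V$SYSSTAT (sample it before and after the 2 GB scan):

    -- 1 - physical reads / logical reads; expect a low value for this workload
    SELECT 1 - (phy.value / (db.value + con.value)) AS cache_hit_ratio
    FROM   v$sysstat phy, v$sysstat db, v$sysstat con
    WHERE  phy.name = 'physical reads'
    AND    db.name  = 'db block gets'
    AND    con.name = 'consistent gets';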

  • Basic tuning question about the buffer cache

    Database: Oracle 10g
    Host: Sun Solaris, 16 CPU server



    I am looking at the behavior of some simple queries as I start tuning our data warehouse.

    Using SQL*Plus and AUTOTRACE, I ran this query twice in a row:

    SELECT *
    FROM PROCEDURE_FACT
    WHERE PROC_FACT_ID BETWEEN 100000 AND 200000

    It used the index on PROC_FACT_ID and performed an index range scan to access the table data by rowid. The first time it ran, there were about 600 physical block reads, as the table data was not in the buffer cache. The second time, it had 0 physical block reads, because the blocks were all in the cache. All this was expected behavior.

    So then I ran this query twice:

    SELECT DATA_SOURCE_CD, COUNT(*)
    FROM PROCEDURE_FACT
    GROUP BY DATA_SOURCE_CD

    As expected, it did a full table scan, because there is no index on DATA_SOURCE_CD, and then hashed the results to find the distinct DATA_SOURCE_CD values. The first run had these results:

    consistent gets 190496
    physical reads 169696

    The second run had these results:

    consistent gets 190496
    physical reads 170248


    NOT what I expected. I would have thought that the second run would find many of the blocks already in the buffer cache from the first execution, so that the number of physical reads would drop significantly.

    Any help to understand this would be greatly appreciated.

    And is there something that can be done to keep the table PROCEDURE_FACT (the central table of our star schema) "pinned" in the buffer cache?

    Thanks in advance.

    - Chris Curzon

    Christopher Curzon wrote:
    Your comment that the buffer cache mainly benefits smaller objects is something I have wondered about a good deal. It sounds as if tuning the buffer cache will have little impact on queries that scan entire tables.

    Chris,

    If you can afford it, and you think it is a reasonable approach with regard to the remaining segments that are supposed to benefit from the buffer cache, you could consider marking your table segment with "CACHE": that changes the behavior of a full table scan on a large segment (Oracle treats small and large segments differently when caching buffers during full table scans; you can override this treatment using the CACHE/NOCACHE keywords). Or move your fact table to a KEEP pool by setting one up (ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = <size>), altering the segment accordingly (ALTER TABLE ... STORAGE (BUFFER_POOL KEEP)), and performing a full table scan to load the blocks into the KEEP cache.

    Note that the disadvantage of the KEEP pool approach is that you have less memory available for the default buffer cache (unless you add more memory to your system). An object marked as CACHE is still in competition with the other objects in the default buffer cache, so it could still be aged out (the same applies to the KEEP pool: if the segment is too large, or too many segments are assigned to the pool, blocks will age out there as well).
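    Putting the KEEP pool steps together, the sequence would look something like this (a sketch; the size is purely illustrative and must fit within your SGA):

    ALTER SYSTEM SET db_keep_cache_size = 200M SCOPE=BOTH;
    ALTER TABLE procedure_fact STORAGE (BUFFER_POOL KEEP);
    -- prime the KEEP pool with a full scan
    SELECT /*+ FULL(pf) */ COUNT(*) FROM procedure_fact pf;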

    So my question: how can I get a parallel scan for queries that do a full table scan, such as the one I posted in my previous message? Is it a matter of supplying the "parallel" hint, or is there an init.ora parameter I should try?

    You can use a PARALLEL hint in your statement:

    SELECT /*+ PARALLEL(PROCEDURE_FACT) */ DATA_SOURCE_CD, COUNT(*)
    FROM PROCEDURE_FACT
    GROUP BY DATA_SOURCE_CD;
    

    or you could mark an object as PARALLEL in the dictionary:

    ALTER MATERIALIZED VIEW PROCEDURE_FACT PARALLEL;
    

    Note that since you have 16 processors (or is it 16 cores that Oracle sees as 32? Check the CPU_COUNT parameter), the default parallel degree would usually be 2 times 16 = 32, which means that Oracle spawns at least 32 parallel slaves for a parallel operation (it could be another set of 32 slaves if the operation, for example, includes a GROUP BY) if you do not use the PARALLEL_ADAPTIVE_MULTI_USER parameter (which allows parallelism to be reduced if several parallel operations run concurrently).

    I recommend choosing a parallel degree lower than your default of 32, because you usually don't gain much with such a high degree; you can often get the same performance using a lower setting like this:

    SELECT /*+ PARALLEL(PROCEDURE_FACT, 4) */ DATA_SOURCE_CD, COUNT(*)
    FROM PROCEDURE_FACT
    GROUP BY DATA_SOURCE_CD;
    

    The same can be applied when marking the object as parallel:

    ALTER MATERIALIZED VIEW PROCEDURE_FACT PARALLEL 4;
    

    Note that when the object is defined as PARALLEL, many operations will be parallelized (even DML can run in parallel if you enable parallel DML, which has some special restrictions), so I recommend using this with caution and beginning with an explicit hint in those statements where you know it will be useful.

    Also check that your PARALLEL_MAX_SERVERS is set high enough when you use parallel operations, which should be the case by default in your version of Oracle.
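    A quick sketch to check the parameters mentioned above:

    SELECT name, value
    FROM   v$parameter
    WHERE  name IN ('cpu_count', 'parallel_max_servers',
                    'parallel_adaptive_multi_user');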

    Kind regards
    Randolf

    Oracle related blog stuff:
    http://Oracle-Randolf.blogspot.com/

    SQLTools ++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.NET/projects/SQLT-pp/

  • Why are the blocks of temporary tables placed in the buffer cache?

    I read the following statement, which seems quite plausible to me: "Oracle 7.3 and later generate 'db file sequential read' events when a dedicated server process reads data from a temporary segment on disk. Older versions of Oracle would read temporary segment data into the database buffer cache using db file scattered reads. Recent releases exploit the heuristic that temporary segment data is not likely to be shareable or revisited, so they read it directly into the server process's program global area (PGA)."

    To verify this statement (and also for the pleasure of seeing one of these rare db file sequential read events), I ran a little experiment on my Oracle 10.2 on Linux (see below). Not only does it seem that, contrary to the statement above, the blocks of temporary tables are placed in the buffer cache, but also V$BH.OBJD for these blocks does not refer to an existing object in the database (at least not one that is listed in DBA_OBJECTS). Incidentally, I traced the session and did not see any db file sequential read events.

    So, I have the following questions:
    (1) Are my experimental setup and my conclusions correct (i.e. are blocks of temporary tables really placed in the buffer cache)?
    (2) If so, what is the reason for placing blocks of temporary tables in the buffer cache? Since these blocks contain session-private data, they cannot be reused by another session. So why pay all the buffer cache management overhead to put the blocks in the buffer cache (and possibly age them out) rather than caching them in private session memory?
    (3) What does V$BH.OBJD refer to for blocks belonging to temporary tables?

    Thanks for any help and information
    Kind regards
    Martin

    Experiment I ran (on 10.2/Linux)
    =============================
    SQL*Plus: Release 10.2.0.1.0 - Production on Sun Oct 24 22:25:07 2010
    
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    
    
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    
    SQL> create global temporary table temp_tab_4 on commit preserve rows as select * from dba_objects;
    
    Table created.
    
    SQL> alter system flush buffer_cache;
    
    System altered.
    
    SQL> select count(*), status from v$bh group by status order by 1 desc;
    
      COUNT(*) STATUS
    ---------- -------
          4208 free
             3 xcur
    
    SQL> select count(*) from temp_tab_4;
    
      COUNT(*)
    ----------
         11417
    
    SQL> -- NOW THE BUFFER CACHE CONTAINS USED BLOCKS, THERE WAS NO OTHER ACTIVITY ON THE DATABASE
    SQL> select count(*), status from v$bh group by status order by 1 desc;
    
      COUNT(*) STATUS
    ---------- -------
          4060 free
           151 xcur
    
    SQL> -- THE BLOCKS WITH THE "STRANGE" OBJD HAVE BLOCK# THAT CORRESPOND TO THE TEMPORARY SEGMENT DISPLAYED
    SQL> -- IN V$TEMPSEG_USAGE
    SQL> select count(*), status, objd from v$bh where status != 'free' group by status, objd order by 1 desc;
    
      COUNT(*) STATUS      OBJD
    ---------- ------- ----------
           145 xcur       4220937
             2 xcur           257
             2 xcur           237
             1 xcur           239
             1 xcur    4294967295
    
    SQL> -- THE OBJECT REFERENCED BY THE NEWLY USED BLOCKS IS NOT LISTED IN DBA_OBJECTS
    SQL> select * from dba_objects where object_id = 4220937 or data_object_id = 4220937;
    
    no rows selected
    
    
    SQL> -- THE BLOCKS WITH THE "STRANGE" OBJD ARE MARKED AS TEMP IN V$BH
    SQL> select distinct temp from v$bh where objd = 4220937;
    
    T
    -
    Y
    
    SQL> 
    Edited by: user4530562 on 25.10.2010 01:12

    Edited by: user4530562 on 25.10.2010 04:57

    The reason for putting blocks of a global temporary table into the buffer cache is the same as the reason for putting ordinary table blocks into the buffer cache -> you want some of them to be in memory.

    If you ask why we don't somehow keep temporary tables in the PGA - well, what happens if that temporary table grows to 50 GB? 32-bit platforms can't even handle that, and you don't want a process growing uncontrollably large.

    Moreover, GTTs allow you to roll back, to roll back to a savepoint (or to the implicit savepoint taken when an error occurs during a DML call), and this requires undo protection. Putting the rows / undo in the PGA would have complicated the implementation even further... as it is now, GTTs are almost ordinary tables which just happen to reside in temp files.

    If you really want to put data in the PGA only, then you can create PL/SQL collections and even access them through SQL (CAST(coll AS xyz_type)) where xyz_type is a TABLE OF some object type.
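    A minimal sketch of that collection approach (the type and names are made up for illustration):

    CREATE TYPE num_tab AS TABLE OF NUMBER;
    /
    DECLARE
      v num_tab := num_tab(1, 2, 3);  -- collection lives in the session's PGA
      n NUMBER;
    BEGIN
      SELECT COUNT(*) INTO n FROM TABLE(CAST(v AS num_tab));
    END;
    /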

    --
    Tanel Poder
    New online seminars!
    http://tech.e2sn.com/Oracle-training-seminars

  • The screen pans because the desktop is larger than the actual screen size. I checked the display options, but how do I disable this oversized screen?

    Hello

    My mother had a family member try to help her increase the display size, but it went wrong, and now the screen image in any program, or even just the desktop, is larger than the size of the screen.  She can move the mouse to the left to bring the far left into view, and the same on the right, but she can never have the whole screen in view now.  How do I restore it to normal for her?  I tried Control Panel - Display, but no luck... please help me help her.

    Hi M Carmel,

    Thanks for posting in the Microsoft community.

    I understand that you are facing an issue with the screen display resolution.

    I suggest you review the links below and check.

    Change the resolution of your monitor

    http://Windows.Microsoft.com/en-us/Windows-XP/help/Setup/change-monitor-resolution

    Change your screen resolution

    http://Windows.Microsoft.com/en-us/Windows-XP/help/change-screen-resolution

    Please follow the recommended steps, review the additional information provided, and post back if you still experience the issue. I will be happy to provide additional options you can use to get this resolved.

  • Make the wallpaper slightly larger than the screen

    Hello

    I am working with the Storm simulator. I know that the screen size is fixed (obviously; I think it's 320 x 480), but I want to use a slightly larger background image and allow scrolling horizontally and vertically.

    In other words, I want to be able to pan across an image larger than the screen; an image that has limits.

    I say, "that has limits," because I tried this, but the image is tiled instead of being a single image. It seems to repeat endlessly to the left and down. Any help please.

    USE_ALL_HEIGHT and USE_ALL_WIDTH are a problem, as I predicted. Since you give your MyScreen VERTICAL_SCROLL and HORIZONTAL_SCROLL, it gives its children a height and a width of Integer.MAX_VALUE >> 1. USE_ALL_HEIGHT and USE_ALL_WIDTH make the fields take all of that huge width and height.

    In order to make the screen exactly the size you want, override sublayout and take a look at the net.rim.device.api.ui.Manager setVirtualExtent and setExtent methods. Some of the best examples of overriding this method are found in the code of

    Implement advanced buttons, fields and managers

    This code can be considered a bible of BlackBerry user interface programming (well, at least for non-touchscreen devices).

  • Videos larger than the display window.

    I recently started having a problem with my Pavilion g6 when watching videos.  I noticed it first on YouTube, but it also affects .mp4 files in Media Player.  I see only a small part of the video; it is larger than the window, or larger than the screen when displayed in full-screen mode.

    After making sure the Flash player and QuickTime were up to date (I was grasping at straws), I found that AMD Steady Video was the issue.  Disabled it and voila, the videos play properly.

    Thanks for the suggestions anyway.

  • Error: "The paging file size for all drives may be somewhat larger than the size you specified" when starting Windows in Windows 7

    Original title: paging notice when opening Windows on Mac, split hard disk

    I have a hard drive on a Mac split between macOS and Windows, with QuickBooks installed on the Windows partition.  When I open Windows, I get a paging file notice: "The paging file size for all drives may be somewhat larger than the size you specified."  I'm using Boot Camp for the Windows side.  It shows the memory as almost full, and I don't yet have much data in QB.  Don't know what I need to do.

    Hello

    1. Did you make any software or hardware changes to the computer before this problem started?

    2. Did it work correctly previously?

    3. How many hard drives are in the computer?

    4. Is the problem with a particular partition or hard drive?

    Try the steps from the links below.

    Change the size of virtual memory

    http://Windows.Microsoft.com/en-us/Windows7/change-the-size-of-virtual-memory

    What is virtual memory?
    http://Windows.Microsoft.com/en-us/Windows7/what-is-virtual-memory

    Hope this information is useful.

  • First paragraph of the text is larger than the following paragraphs

    I have a page with a lot of text on it. For some reason, the first paragraph seems larger than the following paragraphs. But everything I checked shows that they are identical in terms of specs. I copied/pasted/cleared formatting/reapplied it... but no matter what I do, that first paragraph looks bigger! Is there some secret setting that I don't know about?

    I took screenshots and they are indeed the same size. But that first paragraph just seems different! Why would that be?

    I have attached a screenshot. Let me know if my client and I are just seeing things.

    Julie

    example.jpg

    Seems to be the same size.

    See the image below: the first line of paragraph 2 overlaid on the first paragraph (the "o" in "during" lines up with the "o" in "company").

    It may be an optical illusion due to the all caps in the first paragraph.

  • "< Unspecified file name >" file is larger than the maximum size supported by datastore "< unspecified datastore >"

    I know that this issue has been discussed a lot in the forums, but the answer is always to make sure your block sizes are set to 8 MB - mine already are. Let me explain:

    I have a virtual machine with a large amount of attached storage - something along the lines of 10 x 1.99 TB disks. All the VMDKs sit on VMFS partitions with an 8 MB block size, including the VM's configuration (and pagefile location).

    Every time I try to snapshot the virtual machine, I see the "< unspecified file name > file is larger than the maximum size supported by datastore < unspecified datastore >" error. All the other virtual machines snapshot fine, but no other VM has a similar amount of storage to the problem VM.

    I have now moved the virtual machine's configuration files to a new 1.91 TB VMFS 5 partition, but the snapshot error persists. Most of the disks are sitting on VMFS 3.33 or 3.46. It will take me a while to move everything to VMFS 5 to see if that solves the problem.

    The vmware.log for the VM reports:

    2011-10-09T09:55:55.328Z| vcpu-0|  DiskLibCreateCustom: Unsupported disk capacity or disk capacity (2186928627712 bytes) is too large for vmfs file size.
    2011-10-09T09:55:55.328Z| vcpu-0| DISKLIB-LIB   : Failed to create link: The destination file system does not support large files (12)
    2011-10-09T09:55:55.328Z| vcpu-0| SNAPSHOT: SnapshotBranchDisk: Failed to branch disk: '/vmfs/volumes/4dc30ba3-b13c5026-92d8-d485643a1de4/spoon-app/spoon-app_2.vmdk' -> '/vmfs/volumes/4dc30ba3-b13c5026-92d8-d485643a1de4/spoon-app/spoon-app_2-000001.vmdk' : The destination file system does not support large files (12)
    
    

    My VMDKs and volumes are smaller than 2032 GB. I don't understand why this would be a problem.

    Anyone have any ideas?

    Although ESXi 5 supports larger LUNs as physical raw devices (up to 64 TB), the maximum size of a virtual disk has not changed yet.

    André

  • VMware Workstation is unable to open one of the virtual disks required by this virtual machine because it is larger than the maximum file size supported by the host file system.

    I'm transitioning from Windows XP (64-bit) to a new computer with Windows 7 installed.  I upgraded from VMware Workstation 6 to 7.0.1 in the process.  The problem is with a very large RAID 5 volume on the new system.  It's a 5.45 TB partition (GPT and NTFS).

    When I try to start a virtual machine on this volume, I get the message "VMware Workstation is unable to open one of the virtual disks required by this virtual machine because it is larger than the maximum file size supported by the host file system."  This is obviously not true.  If I copy the virtual machine to another drive on the system, it starts without a problem.

    Anyone have any ideas on how to work around this problem?

    Try adding this line to the .vmx file:

    diskLib.sparseMaxFileSizeCheck = "false"

    Maybe it helps.

    Which VMDK type do you use?

    monolithicSparse? - that's the only type it could be, no?

    ___________________________________

    VMX-settings - Workstation FAQ - [MOA-liveCD | http://sanbarrow.com/moa241.html] - VM-infirmary

  • What is in the buffer cache

    Hello members,

    I'm trying to understand why some batch jobs repeatedly do physical I/O (PIO) that I do not expect.

    One thing keeps popping up: is it possible to see what is in the buffer cache? I can query V$ views, but not X$. Version is 10.2.0.3.

    Something along the lines of which segments have how many blocks in the buffer cache, like:
    SEGMENT_NAME                     BLOCKS
    ---------------------------- ----------
    SYS_IL0006063998C00032$$             18
    SYS_C0077916                        108
    STA_BONUS_REMIT                     800
    STABONREM_PK                        216
    Regards
    Peter

    V$BH.OBJD = DBA_OBJECTS.DATA_OBJECT_ID
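    So a sketch of the query you described, using that join (no X$ access needed):

    SELECT o.object_name AS segment_name, COUNT(*) AS blocks
    FROM   v$bh b, dba_objects o
    WHERE  o.data_object_id = b.objd
    AND    b.status != 'free'
    GROUP  BY o.object_name
    ORDER  BY blocks DESC;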

    -----------
    Sybrand Bakker
    Senior Oracle DBA

  • ORA-01438: value larger than the precision specified for this column?

    Hi guys:

    I'm stuck on this error when I try to do an insert into a table. My source has 581K records, but only the code and the values described below give me a headache.

    Here's the DDL for the source and the target.

    CREATE TABLE WRK.VL_FREED (
      "CODE"     VARCHAR2(9),
      "VL_FREED" NUMBER(15,7)
    );

    CREATE TABLE WRK.VL_RENEG (
      "CODE"     VARCHAR2(9),
      "VL_RENEG" NUMBER(15,7)
    );

    CREATE TABLE WRK.WRK_XPTO (
      "CODE"    VARCHAR2(9),
      "VL_XPTO" NUMBER(15,10)
    );

    ------------------------------------------------
    The values in the VL_FREED and VL_RENEG tables:


    CODE = 458330728 (same in both)
    VL_FREED = 191245.3000000
    VL_RENEG = 74095.3800000

    -------------------------------------------------

    When I try to run this insert:

    INSERT INTO WRK.WRK_XPTO
    (
      CODE,
      VL_XPTO
    )
    SELECT
      T1.CODE,
      T1.VL_FREED - T2.VL_RENEG
    FROM WRK.VL_FREED T1, WRK.VL_RENEG T2
    WHERE
      (T1.CODE = T2.CODE);

    I got the error:
    ORA-01438: value larger than specified precision allowed for this column

    But how can this be? The result of 191245.3000000 - 74095.3800000 is not larger than what a NUMBER(15,10) can hold, is it?

    Can someone help me on this?

    NUMBER(15,10) means 15 total digits, 10 of which are to the right of the decimal separator, leaving only 5 on the left.

    190,000 - 75,000 = 115,000 (6 digits).
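    A minimal reproduction of that arithmetic (a hypothetical throwaway table, for illustration):

    CREATE TABLE t (n NUMBER(15,10));
    INSERT INTO t VALUES (99999.9999999999);  -- 5 integer digits: fits
    INSERT INTO t VALUES (117150);            -- 6 integer digits: ORA-01438
    -- widening the column, e.g. NUMBER(16,10), makes room for a 6-digit result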

  • The estimated size of the project is larger than the chosen target media

    I am at a loss trying to create a project using Adobe Encore.  I initially tried to author the project using the CS2 version.  When trying to export, I received the following message:

    "The size of the estimated value of the project is larger than the chosen target support.  While it is an estimate, the project may not build. »

    I have created hundreds of projects in the past but have never received this message before.  I first thought the problem was with the version of Encore I used, but I got the exact same error even when trying to create the project with Encore CS5.

    The project itself has a menu, a motion menu, and a single video file.  The video file runs 152 minutes, and I was trying to export it to a dual-layer DVD.  I tried exporting the file from Adobe Premiere CS5.5 using a variety of settings.  I'm trying to export it to an MPEG2-DVD file, with the source-matching attributes set to the highest values.  The quality parameter on the MainConcept MPEG encoder is set to 5.  I tried exporting the file using both constant and variable bitrates.

    When I recently exported the file using a constant bitrate of 6.5, I still had 650.4 MB remaining in my project.  I still received the same error message about the size of the project, so I think something is wrong.

    I even exported the video file to a compressed AVI file and then tried encoding it again using Adobe Premiere CS5.5 and Adobe Media Encoder.  I still get the same error message about the size of the file.

    Any help or suggestions on how to solve this problem would be greatly appreciated.  Thank you!

    So that you do not end up with wasted discs...

    Create an ISO (in Encore), or a folder on your hard drive (Encore or Premiere Elements), then use the FREE ImgBurn http://www.imgburn.com/index.php?act=download to write the files, folder, or ISO to a DVD or Blu-ray disc (send the author a PayPal donation if you like his program)

    .

    ImgBurn will read the REAL manufacturer code from the disc, which isn't always the same as the brand on the box (Memorex is known for buying "anything" and putting it inside a Memorex box)

    .

    When you write the disc with ImgBurn, use the SLOWEST possible speed setting, so your burner has the best chance of creating "good, well-formed" laser burn holes... Since no DVD drive is required to read a burned disc, having a "good" burn on a high-quality blank will help

    .

    Use Taiyo Yuden single layer, or Verbatim dual layer

    Or Falcon Pro for inkjet-printable dual layer
