Size of the table in R12

Hello:

I need to know the size of the AP_INTERFACE_CONTROLS table in the 12.1.3 and 11.1.0.7 database. I just got this requirement. How do I calculate it? I Googled, but what I found did not apply to this.

Thank you for your help.

873768 wrote:

Hello:

I need to know the size of the AP_INTERFACE_CONTROLS table in the 12.1.3 and 11.1.0.7 database. I just got this requirement. How do I calculate it? I Googled, but what I found did not apply to this.

Thank you for your help.

Query DBA_SEGMENTS - https://forums.oracle.com/thread/1113909

SQL> select segment_name, segment_type, bytes/1024/1024 MB
from dba_segments
where segment_type = 'TABLE' and segment_name = 'AP_INTERFACE_CONTROLS';
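If the total footprint including the table's indexes is also wanted, here is a hedged variant of the same DBA_SEGMENTS idea (the owner AP is an assumption based on the standard EBS Payables schema, not something stated in this thread; adjust as needed):

select s.segment_name, s.segment_type, round(s.bytes/1024/1024, 2) MB
from dba_segments s
where s.owner = 'AP'  -- assumed owner; verify in your instance
and (s.segment_name = 'AP_INTERFACE_CONTROLS'
     or s.segment_name in (select index_name
                           from dba_indexes
                           where table_owner = 'AP'
                             and table_name = 'AP_INTERFACE_CONTROLS'));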

Thank you

Hussein

Tags: Oracle Applications

Similar Questions

  • Size of the table in Keynote - cannot enter px directly

    I inserted a table on a slide, and it did two unexpected things:

    1. It will not allow me to enter a table width in pixels directly - as soon as I enter a value, it reverts to the original value. The only way I can change the size of the table is with the up/down arrows, essentially nudging it a pixel at a time.

    2. When I add an additional column, rather than fitting it within the table's width by adjusting the sizes of the existing columns, it pushes the table out of the slide area, and I have to resize the table manually to compensate.

    I have looked through all of the table's settings and just cannot find why it behaves this way, which is different from the other tables I've created in the past.

    1. You are right. Selecting the Table tab and reducing the row and column size values will shrink your table, but direct table resizing is not supposed to work this way.
    2. Control-click (or two-finger tap) the alphabetical column header and select Add Column After from the menu. The new column is added in place, and the other columns shrink and shift left to accommodate it, while the table itself does not move.
  • How to find the size of the table?

    Hi all

    Can anyone suggest how to find the size of a table? I have a number of tables (15,272 rows selected) whose names start with 'CMPT_'. How can I check the sizes of only the 'CMPT_' tables?





    Could someone please provide the SQL query to run.


    Thanks in advance.


    Vincent

    madala03 wrote:

    Hi thanks for the reply

    My output is like below:

    SUM(BYTES/1024/1024)

    --------------------

    25383.25

    But how do I check all the table names like CMPT and their sizes?

    SELECT segment_name, SUM(bytes/1024/1024) MB
    FROM dba_segments
    WHERE segment_name LIKE 'CMPT_%'
    GROUP BY segment_name;

  • Huge CLOB not allowing the size of the table to be cut!

    Hi Experts,

    Environment: 11.2.0.3. on Solaris 10.

    We have a table that contains the CLOB data and this table takes about 111 GB!

    SQL> desc INFO_MESSAGES

     Name                                      Null?    Type
     ----------------------------------------- -------- ----------------------------
     CLNT_OID                                  NOT NULL VARCHAR2(16)
     USR_OID                                   NOT NULL VARCHAR2(16)
     LAST_CLIENT_MSG_ID                                 VARCHAR2(36)
     LAST_CLIENT_MSG_DATE                               DATE
     LAST_CLIENT_MSG                                    CLOB
     LAST_USR_MSG_BOD_ID                                VARCHAR2(36)
     LAST_USR_MSG_DATE                                  DATE
     LAST_USR_MSG                                       CLOB

    select * from dba_segments order by bytes desc;

    MIGRTN SYS_LOB0000111131C00008$$ LOBSEGMENT SAMS DATA1 20690 6 1963 14528000 119013376000 <- the top row

    This segment belongs to table INFO_MESSAGES. I confirmed this by joining to ALL_LOBS.

    This segment, SYS_LOB0000111131C00008$$, belongs to this column: LAST_USR_MSG

    The LAST_USR_MSG column originally held huge text data, each row being approximately 4 MB in size. To reclaim the space, we set this column to the small value 'data truncated' using an update statement on half of the rows in the table.

    as:

    UPDATE INFO_MESSAGES SET LAST_USR_MSG = 'data truncated' WHERE rownum < 25001; (the real WHERE clause was based on other criteria, but it updated 25,000 rows - half of the table.)

    Now after having done that, the size of the table has not changed!

    The table is still 120 GB. So what should be done to recover the space here? Should we export, truncate, and import the table - or can it be rebuilt online (without causing locks and while allowing the application to access it), or are there other better options?

    Thank you

    OrauserN

    You should use dbms_redefinition and, at the same time, change the LOBs from BASICFILE (which you are probably using) to SECUREFILE.

    SECUREFILE LOBs will reorganize themselves automatically.

    Demo at http://www.morganslibrary.org
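    A minimal sketch of that approach, for illustration only (the owner APP_OWNER and the interim table name are assumptions, not from this thread; in a real run you would also copy indexes, constraints and grants with DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS before finishing):

    -- Interim table with the same columns but SECUREFILE LOB storage (owner is illustrative)
    CREATE TABLE app_owner.info_messages_new (
      clnt_oid             VARCHAR2(16) NOT NULL,
      usr_oid              VARCHAR2(16) NOT NULL,
      last_client_msg_id   VARCHAR2(36),
      last_client_msg_date DATE,
      last_client_msg      CLOB,
      last_usr_msg_bod_id  VARCHAR2(36),
      last_usr_msg_date    DATE,
      last_usr_msg         CLOB
    )
    LOB (last_client_msg, last_usr_msg) STORE AS SECUREFILE;

    BEGIN
      -- check the table can be redefined online (ROWID-based, since no PK was shown)
      DBMS_REDEFINITION.CAN_REDEF_TABLE('APP_OWNER', 'INFO_MESSAGES',
                                        DBMS_REDEFINITION.CONS_USE_ROWID);
      -- start copying the data into the interim table while the application keeps running
      DBMS_REDEFINITION.START_REDEF_TABLE('APP_OWNER', 'INFO_MESSAGES', 'INFO_MESSAGES_NEW',
                                          options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
      -- swap the interim table in; only a brief lock is taken at the end
      DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP_OWNER', 'INFO_MESSAGES', 'INFO_MESSAGES_NEW');
    END;
    /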

    -----------

    Sybrand Bakker

    Senior Oracle DBA

  • size of the table

    How to check the size of the table?
    Select segment_name,Bytes/1024/1024 from DBA_SEGMENTS;
    
    or
    
    SELECT SUM(bytes)/1024/1024 MB
    FROM dba_segments
    WHERE segment_type = 'TABLE'
    AND segment_name = '<TABLE_NAME>';
    
  • difference in size of the Table in oracle and timesten

    Hi all


    I have a large table with 2 million records,

    I see a big difference in the size of the table in Oracle and TimesTen.
    In Oracle the table size is about 4 GB, but in TimesTen it is around 15 GB (using ttSize for 2 M rows).

    Could you please tell me what could be the cause of this difference?
    Is the size of a table in TimesTen always larger than in Oracle?
    What are the factors and parameters affecting the PermSize?

    It is typical for the storage needed for a data set to be significantly larger in TimesTen than in Oracle. This is due to the very different internal storage organization in TimesTen compared with Oracle; Oracle is optimized to save space while TimesTen is optimized for performance.

    Ways to minimize these costs are:

    1. Make sure you use TimesTen 11.2.1. It has some (minor) storage compaction improvements compared with earlier versions.

    2. Assess the use of numeric types; the TimesTen native types (TT_TINYINT, TT_SMALLINT, TT_INTEGER and TT_BIGINT) use less space than NUMBER and are also more efficient for computation (see the sketch below).

    3. Check the use of variable-length data (VARCHAR2, VARBINARY, NVARCHAR) and the trade-offs between inline and out-of-line storage (see the documentation for the compromise between these TimesTen storage options).

    Even when you use the above, you will still see significant storage 'inflation' in TimesTen compared with Oracle.
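    As an illustration of point 2 only (the table and column names are made up, not from this thread):

    -- Oracle-style definition: every numeric column is a variable-width NUMBER
    CREATE TABLE order_facts_num (
      order_id NUMBER(10),
      qty      NUMBER(5),
      line_no  NUMBER(3)
    );

    -- TimesTen-friendly definition: fixed-width native types are smaller in TimesTen
    -- and cheaper to compute on
    CREATE TABLE order_facts_tt (
      order_id TT_INTEGER,
      qty      TT_SMALLINT,
      line_no  TT_TINYINT
    );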

    Chris

  • Size of the table is too big - performance problem

    Hello

    Suppose we have a table that has about 160 columns. About 120 of these columns are of VARCHAR2 data type, each sized between about 100 and 3000.

    This table also has about 2 million rows. I don't know whether that is considered a large table?

    Are tables like this a good representation of the data? I am in doubt because the table is very large and queries may take a long time. We have about 10 indexes on this table.

    What kind of precautions should be taken when this kind of table is involved in the database and is needed by the application?

    Version of database is Oracle 10.2.0.4

    I know that the question is a little vague, but I wonder what needs to be done and where I should start digging, just in case I have performance issues while trying to select or update the data.

    I also want to know whether there is any ideal size for tables, and whether anything larger than that needs to be treated differently.

    Thanking you
    Rocky

    Any table with more than 50 columns should be regarded with suspicion. That does not mean there are no appropriate uses for tables with 120 or 160 columns, but it does mean they are quite rare.

    What does bother me about your first paragraph is the number of text columns with sizes of up to 3K. It is very suggestive of poor design. One thing is for sure... no one is reporting and printing on anything smaller than a plotter.

    2 million rows is small by almost any definition, so I wouldn't worry about that. Partitioning is an option, but only if the partitioning scheme can be made to work with your queries, and we have not seen any of them, nor do we have any idea what you could use as a partition key or which type of partitioning, so any intelligent discussion of this option would require a lot more information from you.

    There are no precautions that relate to what you wrote. You have told us nothing about security, usage, transaction volumes or anything else important for such a review.

    What needs to be done, going forward, is for someone who understands normalization to look at this table, review the business rules, review the purpose to which it will be put, and especially the reports and queries that will be run against it, and then either justify or change the structure. Then, with the table evaluation completed... you should run the SQL and examine the plans generated using DBMS_XPLAN, and compare timings against your Service Level Agreement (SLA) with the system's clients.
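    A minimal sketch of that DBMS_XPLAN step (the query, table and column names are placeholders, not from this thread):

    EXPLAIN PLAN FOR
      SELECT *
      FROM   wide_table
      WHERE  some_indexed_col = :b1;

    -- show the plan the optimizer generated for the statement above
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);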

  • change the size of the table and remove the duplicate line

    Hello

    I run a program that has a for loop that executes 3 times.  In each iteration, data are collected for x and y points (10 in this example).  How can I output the data as a single 2D array with 30 points (size 2 x 30)?

    Moreover, from that array, how do I delete duplicate rows that contain the same value of x?

    Thank you

    hiNi.

    It probably would have helped to attach a file that contains the consecutive duplicate rows.

    Here is a simple solution.

  • size of the table on disk

    Hello

    I wanted to know the size of a table on disk.  Assume I have a table with 10 columns...  only VARCHAR2, NUMBER and DATE fields.  Say the same data in a CSV file, with every field fully occupying its VARCHAR2 size and 100,000 records, occupies 100 MB on disk.  In that case, how much space will the table occupy on disk?  If it is not the same, what is the approximate percentage of deviation?

    If there is an index defined on 3 columns of the table, and the field types for these 3 columns are a DATE field, a NUMBER field and a VARCHAR2(100) field respectively, approximately how much additional disk space will be utilized, in percentage terms?

    Please note that I am not interested in any other sizing such as overall DB size, other schema sizes, software size, etc.; I just wanted to know, if I have 1 GB of data in a CSV file and I load it into the table, will it take an equal amount of space on disk, or is there a major difference?  Of course, I know that the widths defined for the table's fields can make a major difference to disk space usage.  However, for a rough estimate, just assume that the VARCHAR2 column sizes and the actual data in the CSV are the same.

    Thank you

    -Anand

    The best way to find out is to load a representative sample of data and check the size of the table and index segments.

    The size will be different, for sure. DATEs are 7 bytes. VARCHAR2 takes at least one byte more than the data size. NUMBER will usually take less space than its text representation in the CSV.

    In addition, Oracle tables are stored in a heap structure and generally have some unused space allocated to the table.
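    A minimal sketch of that check (the schema, table and index names are placeholders):

    -- after loading, say, 10,000 representative rows into MY_TABLE
    SELECT segment_name, segment_type, ROUND(bytes/1024/1024, 2) AS mb
    FROM   dba_segments
    WHERE  owner = 'MY_SCHEMA'
    AND    segment_name IN ('MY_TABLE', 'MY_TABLE_IX1');
    -- then scale the result up to the full row count for a rough estimate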

    John

  • Extreme increase in database size - need to monitor table sizes

    Hi all

    I need help tracking the growth of tables. The database has grown enormously, from 192 GB to 200 GB over five days, representing roughly a 5% increase in database size. The general scenario is that the db grows 20% annually, but this increase in one week will cause future hardware problems.

    Please advise how to find the tables that are still growing.

    Thank you
    DBA Junr.

    Hi, I don't know if this will help you now, but I have done something like this on a daily basis for about 3 years. Literally, I have recorded the size of each segment daily for that whole time and can go back through the history and graph the growth. If I had the issue you are asking about, I would be able to tell you exactly which segments grew. If you need this on a smaller time frame, you could change the value stored in the 'created_date' column to include the hour, I guess.

    Anyway, here's the create statement.

    create table db_segment_history
    (
      db_segment_history_id     integer not null,
      owner                 varchar2(30),
      segment_name          varchar2(81),
      partition_name        varchar2(30),
      segment_type          varchar2(18),
      created_date          date default trunc(sysdate) not null,
      tablespace_name       varchar2(30),
      header_file           number,
      header_block          number,
      bytes                 number,
      blocks                number,
      block_size            number,
      extents               number,
      initial_extent        number,
      next_extent           number,
      min_extents           number,
      max_extents           number,
      pct_increase          number,
      freelists             number,
      freelist_groups       number,
      relative_fno          number,
      buffer_pool           varchar2(7)
    );
    
    create sequence db_segment_history_seq cache 1000;
    
    create unique index xpkdb_segment_history on db_segment_history( db_segment_history_id );
    
    alter table db_segment_history add ( constraint xpkdb_segment_history primary key ( db_segment_history_id )
    using index );
    
    create unique index ak1db_segment_history on db_segment_history(
         owner, segment_name, partition_name, segment_type, created_date );
    
    create trigger db_segment_history_bir
    before insert on db_segment_history
    for each row
    begin
         select     db_segment_history_seq.nextval
         into     :new.db_segment_history_id
         from     dual;
    end;
    /
    

    This is the insert / select that I use.

    insert into db_segment_history(
         owner, segment_name, created_date,
         partition_name, segment_type, tablespace_name,
         header_file, header_block, bytes,
         blocks, block_size, extents,
         initial_extent, next_extent, min_extents,
         max_extents, pct_increase, freelists,
         freelist_groups, relative_fno, buffer_pool )
    select  owner, segment_name, trunc( sysdate ),
         partition_name, segment_type, tablespace_name,
         header_file, header_block, bytes,
         blocks, bytes/blocks block_size, extents,
         initial_extent, next_extent, min_extents,
         max_extents, pct_increase, freelists,
         freelist_groups, relative_fno, buffer_pool
    from    dba_segments
    where   tablespace_name in(
              select tablespace_name from dba_tablespaces where contents = 'PERMANENT' );
    
    commit;
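
    A hedged example of how the history table could then be queried for growth over a window (this query is not part of the original post):

    -- segments that grew the most over the last 7 recorded days
    select  owner,
         segment_name,
         max(bytes) - min(bytes) growth_bytes,
         round( ( max(bytes) - min(bytes) ) / 1024 / 1024 ) growth_mb
    from    db_segment_history
    where   created_date >= trunc( sysdate ) - 7
    group by owner, segment_name
    having  max(bytes) > min(bytes)
    order by growth_bytes desc;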
    

    I hope this helps.
    Michael Cunningham

    Published by: Michael C on February 21, 2012 14:19

  • Size of the tablespace increases too quickly (DEV_MDS of SOA Suite 11g)

    I am using Oracle SOA Suite 11g.

    We are facing a database (tablespace) problem. After each composite deployment, the DEV_MDS schema grows by about 40 MB. We deploy our processes very often in the development stage, and when we undeploy a process the space should ideally be freed, but it is not.

    Please help me out of this problem; it would also be appreciated if someone could explain why this schema size increases so quickly.

    Thank you

    Whenever you undeploy a composite from a partition in the EM console, it also gets deleted from the MDS store. But since you must keep all versions of your composite, your MDS store is bound to grow with each deployment.

    As you mentioned, the size of your composite is huge due to embedded Java activity; that is something your design should avoid, because there is no reusability for a Java component embedded in several versions of the same composite, or across composites.

    If you want to use Java in your BPEL process, I recommend using a reusable composite with a Spring context that you can reference from different composites or from different versions of the same composite.

    It also gives you an advantage in terms of modularising your application, because whenever you need to change your Java or BPEL composite, they can be changed and deployed independently.

  • Size of the data in a table and the size of the table

    I am trying to determine the size of the data in a table and also the overall table size. I use the following query:

    SELECT LOWER(a.owner) AS owner,
    LOWER(a.table_name) AS table_name,
    a.tablespace_name,
    a.num_rows,
    ROUND((a.blocks * 8 / 1024)) AS size_mb,
    a.blocks,
    a.blocks * 8 AS blocks_kilo_byte,
    a.pct_free,
    a.compression,
    a.logging,
    b.bytes / 1024 / 1024
    FROM all_tables a, dba_segments b
    WHERE a.owner LIKE UPPER('USER_TEST')
    AND a.table_name = 'X_TEST_TABLE'
    AND b.segment_name = a.table_name
    AND b.owner = a.owner
    ORDER BY 1, 2;

    Is this the right way to go about finding the size of the data in a table? If not, please give your suggestions.

    BTW, this is on a 10g version.

    You probably want to use the DBMS_SPACE package, in particular the SPACE_USAGE and UNUSED_SPACE procedures, to get an accurate account of the space used in a table. Your query may give you a relatively accurate estimate if the optimizer statistics on your table are reasonably accurate, but there is no guarantee that the optimizer statistics are accurate.

    If you just want an approximate answer and you're comfortable that your statistics are accurate, this query may be close enough. If you want an exact answer, however, use the DBMS_SPACE package.
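    A minimal sketch of the DBMS_SPACE approach (a non-partitioned table is assumed; the owner and table name are the ones used in the query above):

    SET SERVEROUTPUT ON
    DECLARE
      l_total_blocks  NUMBER;
      l_total_bytes   NUMBER;
      l_unused_blocks NUMBER;
      l_unused_bytes  NUMBER;
      l_lue_file_id   NUMBER;
      l_lue_block_id  NUMBER;
      l_last_used_blk NUMBER;
    BEGIN
      DBMS_SPACE.UNUSED_SPACE(
        segment_owner             => 'USER_TEST',
        segment_name              => 'X_TEST_TABLE',
        segment_type              => 'TABLE',
        total_blocks              => l_total_blocks,
        total_bytes               => l_total_bytes,
        unused_blocks             => l_unused_blocks,
        unused_bytes              => l_unused_bytes,
        last_used_extent_file_id  => l_lue_file_id,
        last_used_extent_block_id => l_lue_block_id,
        last_used_block           => l_last_used_blk);

      -- allocated space vs. space never used above the high water mark
      DBMS_OUTPUT.PUT_LINE('Allocated MB: ' || ROUND(l_total_bytes / 1024 / 1024, 2));
      DBMS_OUTPUT.PUT_LINE('Unused MB   : ' || ROUND(l_unused_bytes / 1024 / 1024, 2));
    END;
    /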

    Justin

  • Video size in the table of contents when using Insert Video on a slide

    Hello, is it possible to increase the size of the video in the table of contents? It automatically resizes it to a height of 144 and a width of 196, and I would like it to be larger, since the original size is greater. Or can I not do that, and do I need to use another method?

    Thank you

    Xav

    Hello and welcome to the forum,

    No, I'm sorry, you won't be able to increase the size of this table of contents video.

    You could insert the video on each slide or, if you do not use Captivate 5, insert it on a master slide that you apply to all slides?

    Lilybiri

  • After defragmentation, the size of the table increases...

    Hi all

    I have seen this many times but do not know why it happens. Sometimes, when we move / defragment a table, its size increases. Kindly shed some light on this... The command we generally use is mentioned below...


    ALTER TABLE <table_name> MOVE PARALLEL 20;
    ALTER TABLE <table_name> NOPARALLEL;


    Regds
    Rahul

    I guess I would go back to the question of why you are trying to defragment a table in the first place (and defragmentation isn't even the right word here - at best you are repacking the data). Unless a table has permanently shrunk in size, or all your inserts are direct-path loads, it is a really unnecessary exercise, and trying to predict which tables will grow slightly and which might shrink briefly is fairly useless in practice. Oracle is perfectly happy to reuse space freed by deletions for subsequent insertions - there is really no need to try to repack the data. Given the way PCTFREE and PCTUSED work, assuming you have chosen reasonable parameters, you are better off letting rows go through their normal life cycle rather than moving data between segments.

    Justin

  • Largest DB table you have programmed against? Did the size of the table...

    What is the largest database table you have programmed against? Did the size of the table influence your design decisions?


    Please answer...


    ...thanks a lot

    Published by: user2701622 on September 28, 2008 22:11

    While designing any application, many table-related things must be considered, especially in the case of an OLTP database, where the workflow means response times must be very fast. DSS and data warehouse queries can take longer; a quick answer is not required there.
    The amount of data growth expected in an operational table in the future is an important factor that can affect application performance, so it needs to be considered during application design.
    Appropriate indexing must be maintained. An appropriate purge system (i.e. moving old data into history tables, as sketched below) should also be maintained.
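
    A hedged sketch of such a purge step (the table and column names are made up, not from this thread):

    -- move rows older than two years into a history table, then remove them
    INSERT INTO orders_history
    SELECT * FROM orders
    WHERE  order_date < ADD_MONTHS(SYSDATE, -24);

    DELETE FROM orders
    WHERE  order_date < ADD_MONTHS(SYSDATE, -24);

    COMMIT;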
