Tablespace query

I recently created a database with a tablespace ABC_TBS sized at 2 GB. A small data import consumed about 10 MB of storage in a table in another tablespace, USERS.

Now I tried to migrate data into a new table X (tablespace ABC_TBS) built from the old tables A (tablespace ABC_TBS) and B (USERS tablespace) with an outer join. The CREATE statement went well.

On the INSERT, the tablespace got used up completely (all 2 GB) and the statement failed with:

Error report:
SQL Error: ORA-01658: unable to create INITIAL extent for segment in tablespace CTB_TBS
01658. 00000 -  "unable to create INITIAL extent for segment in tablespace %s"
*Cause:    Failed to find sufficient contiguous space to allocate INITIAL
           extent for segment being created.
*Action:   Use ALTER TABLESPACE ADD DATAFILE to add additional space to the
           tablespace or retry with a smaller value for INITIAL
Rollback

When I check the size of the tablespace, it shows as almost fully used, but before the migration it had about 2000 MB of free space.

Tablespace   USED (MB)   FREE (MB)   TOTAL (MB)   Pct. Free
ABC_TBS      2044        4           2048         0

But when I look at the usage of the tables and indexes in ABC_TBS, it only comes to about 7 MB:

OWNER   TABLE_NAME       USED (MB)   TABLESPACE
ABC     TABLE1           2           ABC_TBS
ABC     TABLE1_N1_IND    2           ABC_TBS
ABC     TABLE1_N2_IND    2           ABC_TBS
ABC     TABLE2_U1_IND    1           ABC_TBS
ABC     TABLE2           0           ABC_TBS
ABC     TABLE1_U1_IND    0           ABC_TBS
ABC     TABLE1_N3_IND    0           ABC_TBS
ABC     TABLE3           0           ABC_TBS

Now my question is: if no data ever got inserted into the new table, how did 2000 MB of space get consumed? How can I reclaim it? I have no problem adding a new datafile, but the mystery is where the 2 GB went.

I think I found my answer: it was the table partitions that consumed the space.
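
For reference, a sketch of the kind of check that shows this, assuming DBA privileges (ABC_TBS is the tablespace from this thread); it lists every segment, including partition segments, by size:

select owner, segment_name, partition_name, segment_type,
       round(bytes/1024/1024) used_mb
from   dba_segments
where  tablespace_name = 'ABC_TBS'
order  by bytes desc;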

Thank you all.

Tags: Database

Similar Questions

  • Simple Tablespace query for a beginner

    Hello

    I was wondering if someone could tell me whether it is possible to run a query that lists everything in a tablespace: the indexes, tables, and so on.

    Thank you very much in advance,

    Dan

    Like this?

    SQL> select segment_name, segment_type from dba_segments where tablespace_name = 'BSE_TS';
    
    SEGMENT_NAME              SEGMENT_TYPE
    ------------------------- ------------------
    BSE_LOADER                TABLE
    DAILY_TRADES              TABLE
    DL_TRD_PK                 INDEX
    GROUPS                    TABLE
    GROUPS_PK                 INDEX
    STOCKS                    TABLE
    STOCKS_PK                 INDEX
    TOP_STOCKS                TABLE
    TRADING_DAYS              TABLE
    SYS_C002892               INDEX
    
    10 rows selected.
    
    SQL>
    

    You can use either the DBA_SEGMENTS or the USER_SEGMENTS view.

    Asif Momen
    http://momendba.blogspot.com

  • Query for existing user-created tablespaces

    Hi all,

    Is it possible to display the DDL of the user-created tablespaces with a query?

    Thank you

    This one?

    SQL> select dbms_metadata.get_ddl('TABLESPACE','USERS') from dual;
    
    DBMS_METADATA.GET_DDL('TABLESPACE','USERS')
    --------------------------------------------------------------------------------
    
      CREATE TABLESPACE "USERS" DATAFILE
      '/oradata1/TEST11/users01.dbf' SIZE 5242880
      AUTOEXTEND ON NEXT 1310720 MAXSIZE 32767M
      LOGGING ONLINE PERMANENT BLOCKSIZE 8192
      EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
     NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO
       ALTER DATABASE DATAFILE
      '/oradata1/TEST11/users01.dbf' RESIZE 78643200
    
    SQL>
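
    To cover every tablespace rather than just USERS, a sketch along these lines could be used (which tablespaces count as system-created is an assumption; you may also need SET LONG in SQL*Plus to see the full CLOBs):

    set long 100000
    select dbms_metadata.get_ddl('TABLESPACE', tablespace_name)
    from   dba_tablespaces
    where  tablespace_name not in ('SYSTEM', 'SYSAUX', 'TEMP', 'UNDOTBS1');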
    
  • Query to check tablespace and free size

    Hi all

    I googled and found queries that check tablespace and free size, but they list all tablespaces.

    I want to check the free, used, and total size in MB of one particular tablespace, for example 'ZEUS'. Only this tablespace should be listed, with the necessary details.

    Any help?

    Kind regards

    Ritu

    Try this one:

    -- ts_size.sql
    set pages 49999 lines 120

    col tablespace_name for a32 trunc
    col "Total GB" for 999,999.9
    col "Used GB"  for 999,999.9
    col "Free GB"  for 999,999.9
    col "Pct Free" for 999.9
    col "Pct Used" for 999.9

    compute sum of "Total GB" on report
    compute sum of "Used GB"  on report
    compute sum of "Free GB"  on report
    break on report

    select a.tablespace_name,
           b.total/1024/1024/1024                  "Total GB",
           (b.total - a.total_free)/1024/1024/1024 "Used GB",
           a.total_free/1024/1024/1024             "Free GB",
           (a.total_free/b.total)*100              "Pct Free",
           ((b.total - a.total_free)/b.total)*100  "Pct Used"
    from   (select tablespace_name, sum(bytes) total_free
            from   sys.dba_free_space
            group  by tablespace_name) a,
           (select tablespace_name, sum(bytes) total
            from   sys.dba_data_files
            group  by tablespace_name) b
    where  a.tablespace_name like 'ZEUS%'
    and    a.tablespace_name = b.tablespace_name
    order  by 1
    /

    PS: You can easily adapt it to show MB instead.

  • How to check the size of the tablespace by query?

    How to check the size of the tablespace by query?

    Run this query...

    SELECT a.tablespace_name                           "Tablespace",
           a.bytes/(1024*1024)                         "Total_Size (MB)",
           SUM(b.bytes)/(1024*1024)                    "Free_Space (MB)",
           NVL(ROUND(SUM(b.bytes)*100/a.bytes), 1)     "% Free",
           ROUND((a.bytes - SUM(b.bytes))*100/a.bytes) "% Used"
    FROM   dba_free_space b,
           (SELECT tablespace_name, SUM(bytes) bytes
            FROM   dba_data_files
            GROUP  BY tablespace_name) a
    WHERE  b.tablespace_name (+) = a.tablespace_name
    GROUP  BY a.tablespace_name, a.bytes
    UNION ALL
    SELECT a.tablespace_name                           "Tablespace",
           b.bytes/(1024*1024)                         "Total_Size (MB)",
           SUM(a.bytes_free)/(1024*1024)               "Free_Space (MB)",
           NVL(ROUND((SUM(b.bytes) - a.bytes_used)*100/b.bytes), 1) "% Free",
           ROUND((SUM(b.bytes) - a.bytes_free)*100/b.bytes)         "% Used"
    FROM   dba_temp_files b,
           (SELECT tablespace_name, bytes_free, bytes_used
            FROM   v$temp_space_header
            GROUP  BY tablespace_name, bytes_free, bytes_used) a
    WHERE  b.tablespace_name (+) = a.tablespace_name
    GROUP  BY a.tablespace_name, b.bytes, a.bytes_free, a.bytes_used
    ORDER  BY 4 DESC;

  • Query to find the space associated with a tablespace

    Hi gurus...

    I need to import a dump into an empty schema in a new database. For this I need to create two tablespaces. But there are space constraints, which is why I need to assign almost exactly the right size to the tablespaces I create.

    So is there any query with which I can get the space occupied by the source schema in the corresponding tablespaces?

    >

    There is an existing database server on which a SOURCE_SCHEMA schema is located, and I took a dump of it. Suppose this SOURCE_SCHEMA is associated with two tablespaces, TABLESPACE_1 and TABLESPACE_2. Now I need to know the space occupied by SOURCE_SCHEMA in each of these two tablespaces on that database server.

    Hope this helps:

    select sum(bytes)/1024/1024/1024 size_in_gb
    from   dba_segments
    where  owner = 'SOURCE_SCHEMA'
    and    tablespace_name = 'TABLESPACE_1';
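
    If you want both tablespaces in one pass, a grouped variant of the same query should work (SOURCE_SCHEMA is the example owner from the question):

    select tablespace_name, sum(bytes)/1024/1024/1024 size_in_gb
    from   dba_segments
    where  owner = 'SOURCE_SCHEMA'
    group  by tablespace_name;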

  • query to find the tablespace quota

    Hi... good afternoon everyone...

    I want to know, via a query, the usernames, the tablespaces they use, and the quota each user has on each tablespace.
    I have the SYS login... can you please help me with the query?


    Thanks in advance...

    select username, tablespace_name, max_bytes, bytes
    from   dba_ts_quotas;

    BYTES - number of bytes charged to the user
    MAX_BYTES - the user's quota in bytes, or -1 if unlimited
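
    A small variant, if you prefer MB and an explicit UNLIMITED label (a sketch against the same view):

    select username,
           tablespace_name,
           round(bytes/1024/1024) used_mb,
           decode(max_bytes, -1, 'UNLIMITED', to_char(round(max_bytes/1024/1024)) || ' MB') quota
    from   dba_ts_quotas
    order  by username, tablespace_name;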

    Published by: jazz81 on February 3, 2011 11:41

  • Tablespace growth

    Hi all

    We are on 11gR2, 2-node RAC, on Windows 2008 R2. I want a month-wise tablespace growth report for the last 6 months. I would be grateful if someone could help me out. The following query gives me only one month. I want it month-wise. Please help.

    SELECT TO_CHAR(sp.begin_interval_time, 'DD-MM-YYYY') days,
           ts.tsname,
           MAX(ROUND((tsu.tablespace_size * dt.block_size)/(1024*1024), 2))     cur_size_MB,
           MAX(ROUND((tsu.tablespace_usedsize * dt.block_size)/(1024*1024), 2)) usedsize_MB
    FROM   dba_hist_tbspc_space_usage tsu,
           dba_hist_tablespace_stat   ts,
           dba_hist_snapshot          sp,
           dba_tablespaces            dt
    WHERE  tsu.tablespace_id = ts.ts#
    AND    tsu.snap_id       = sp.snap_id
    AND    ts.tsname         = dt.tablespace_name
    AND    ts.tsname NOT IN ('SYSAUX', 'SYSTEM')
    GROUP  BY TO_CHAR(sp.begin_interval_time, 'DD-MM-YYYY'), ts.tsname
    ORDER  BY ts.tsname, days
    /

    Can you please send the output you get.

    The above query generates day-wise results (not month-wise).

    For a month-wise report, use TO_CHAR(sp.begin_interval_time, 'MM-YYYY') instead of TO_CHAR(sp.begin_interval_time, 'DD-MM-YYYY'), or use TRUNC(begin_interval_time, 'MM') instead of TO_CHAR(sp.begin_interval_time, 'DD-MM-YYYY').
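
    For illustration, a sketch of the month-wise variant with that substitution applied (it reuses the same AWR views and columns as the query above):

    SELECT TO_CHAR(sp.begin_interval_time, 'MM-YYYY') mth,
           ts.tsname,
           MAX(ROUND((tsu.tablespace_size * dt.block_size)/(1024*1024), 2))     cur_size_MB,
           MAX(ROUND((tsu.tablespace_usedsize * dt.block_size)/(1024*1024), 2)) usedsize_MB
    FROM   dba_hist_tbspc_space_usage tsu,
           dba_hist_tablespace_stat   ts,
           dba_hist_snapshot          sp,
           dba_tablespaces            dt
    WHERE  tsu.tablespace_id = ts.ts#
    AND    tsu.snap_id       = sp.snap_id
    AND    ts.tsname         = dt.tablespace_name
    AND    ts.tsname NOT IN ('SYSAUX', 'SYSTEM')
    GROUP  BY TO_CHAR(sp.begin_interval_time, 'MM-YYYY'), ts.tsname
    ORDER  BY ts.tsname, mth
    /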

    If you want day-wise data but found only one month of it, it means that older snapshots have been purged from your database.

    You can check this with the query below, which shows the oldest snapshot you have.

    select min(sp.begin_interval_time) from dba_hist_snapshot sp;

    Regards,

    Anurag.

  • Using BULK COLLECT INTO with LIMIT to avoid running out of TEMP tablespace?

    Hi all

    I want to know whether using BULK COLLECT INTO with a LIMIT will help avoid running out of TEMP tablespace.

    We use Oracle 11g R1.

    I have been assigned the task of creating journaling for all the tables used by an APEX application.

    I created procedures that execute SQL statements to do a CTAS (CREATE TABLE ... AS SELECT) and then create triggers on those tables.

    We have about three tables with more than 26 million records.

    It seemed to run fine until we reached a table with more than 15 million records, where we got an error saying we had run out of TEMP tablespace.

    I googled this topic and found these tips:

    Use NOLOGGING

    Use parallel

    Use BULK COLLECT INTO with LIMIT

    However, the questions those tips address are usually about running out of memory rather than running out of TEMP tablespace.

    I'm just a junior developer and have never dealt with tables of more than 10 million rows at a time like this before.

    Database support is outsourced, so we try to keep contact with the DBAs to a minimum. My manager asked me to find a solution without asking the DBA to extend the TEMP tablespace.

    I wrote some BULK COLLECT INTO code that inserts about 300,000 rows at a time in the development environment. It seems to work.

    But the code has only been run against a table of about 4,000,000 records. I tried to add more data to the test table, but once again we ran out of tablespace on DEV (this time a data tablespace rather than TEMP).

    I'll give it a go against the 26-million-record table in production this weekend. I just want to know whether it is worth trying.

    Thanks for reading this.

    Ann

    I really just needed to check that you did not have huge row sizes (like several KB per row); they are not too bad at all, which is good!

    A good rule of thumb for choosing the LIMIT value is to see how much memory you can afford to consume in the PGA (to reduce the number of fetch and FORALL calls and therefore the context switches) and set the limit as close to that amount as possible.

    Use the routines below to check which LIMIT value is best suited to your system, because it depends on your memory allocation and CPU consumption. Results will vary with your PGA limits and with row length, but this method will give you a good order of magnitude.

    CREATE OR REPLACE PROCEDURE show_pga_memory (context_in IN VARCHAR2 DEFAULT NULL)
    IS
       l_memory NUMBER;
    BEGIN
       SELECT st.value
         INTO l_memory
         FROM SYS.v_$session se, SYS.v_$sesstat st, SYS.v_$statname nm
        WHERE se.audsid = USERENV ('SESSIONID')
          AND st.statistic# = nm.statistic#
          AND se.sid = st.sid
          AND nm.name = 'session pga memory';

       DBMS_OUTPUT.put_line (CASE
                                WHEN context_in IS NULL THEN NULL
                                ELSE context_in || ' - '
                             END
                             || 'session PGA memory used = '
                             || TO_CHAR (l_memory));
    END show_pga_memory;
    /

    DECLARE
       PROCEDURE fetch_all_rows (limit_in IN PLS_INTEGER)
       IS
          CURSOR source_cur
          IS
             SELECT *
               FROM YOUR_TABLE;

          TYPE source_aat IS TABLE OF source_cur%ROWTYPE
             INDEX BY PLS_INTEGER;

          l_source source_aat;
          l_start  PLS_INTEGER;
          l_end    PLS_INTEGER;
       BEGIN
          DBMS_SESSION.free_unused_user_memory;
          show_pga_memory (limit_in || ' - BEFORE');
          l_start := DBMS_UTILITY.get_cpu_time;

          OPEN source_cur;

          LOOP
             FETCH source_cur
                BULK COLLECT INTO l_source
                LIMIT limit_in;

             EXIT WHEN l_source.COUNT = 0;
          END LOOP;

          CLOSE source_cur;

          l_end := DBMS_UTILITY.get_cpu_time;
          DBMS_OUTPUT.put_line ('CPU elapsed time for limit of '
                                || limit_in
                                || ' = '
                                || TO_CHAR (l_end - l_start));
          show_pga_memory (limit_in || ' - AFTER');
       END fetch_all_rows;
    BEGIN
       fetch_all_rows (20000);
       fetch_all_rows (40000);
       fetch_all_rows (60000);
       fetch_all_rows (80000);
       fetch_all_rows (100000);
       fetch_all_rows (150000);
       fetch_all_rows (250000);
       -- etc.
    END;
    /

  • Understanding select ... as of timestamp queries

    Hello

    I'm stuck with this scenario: we hand production data to developers for testing purposes after altering the business-critical columns with arbitrary values via UPDATE statements.

    The problem is that when we fire a SELECT ... AS OF TIMESTAMP query, we can still display the old data. I created an example test case:

    CREATE TABLE test1 (id NUMBER);

    Inserted some values:

    SQL > select * from test1;

    ID

    ----------

    1

    2

    3

    4

    5

    5

    5

    5

    5

    5

    10 rows selected.

    SQL> update test1 set id = 3 where id = 5;

    6 rows updated.

    SQL> commit;

    Commit complete.

    Now the data in the table are:

    SQL > select * from test1;

    ID

    ----------

    1

    2

    3

    4

    3

    3

    3

    3

    3

    3

    10 rows selected.

    Now when I fire the AS OF TIMESTAMP query, I can see the old data:

    SQL> select * from test1 as of timestamp sysdate - 5/1440;

    ID

    ----------

    1

    2

    3

    4

    5

    5

    5

    5

    5

    5

    10 rows selected.

    SQL> select flashback_on from v$database;

    FLASHBACK_ON

    ------------------

    NO

    SQL > show parameter recyclebin;

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    recyclebin                           string      OFF

    Now I would like to know where Oracle recovers the old data from. To my knowledge it fetches this data from the undo tablespace. If so, I would like to know whether it is possible to stop this, because it could expose the old data to the client.

    Note: this isn't a good way to hide data, but it is a management decision not to opt for a data masking tool because of the licensing cost.

    You could do a DDL that does not actually change anything:

    orclz> desc dept

    Name                                      Null?    Type
    ----------------------------------------- -------- ------------
    DEPTNO                                    NOT NULL NUMBER(2)
    DNAME                                              VARCHAR2(14)
    LOC                                                VARCHAR2(13)

    orclz> alter table dept modify (dname varchar2(14));

    Table altered.

    orclz> select * from dept as of timestamp (systimestamp - 1/24);

    select * from dept as of timestamp (systimestamp - 1/24)

    *

    ERROR at line 1:

    ORA-01466: unable to read data - table definition has changed

    orclz >

    --

    John Watson

    Oracle Certified Master DBA

  • Parallel SQL query execution feature

    Hello guys,

    I installed an Oracle 11gR2 RAC (11.2.0.3) database, and I see usage of the parallel SQL query execution feature, although I have never used it.

    I would like to know whether this feature is enabled by default in RAC environments, and whether it requires me to pay for Enterprise Edition.
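
    The listing below looks like output from DBA_FEATURE_USAGE_STATISTICS; for reference, a sketch of the kind of query that produces it (the column names are those of that view):

    select name, detected_usages, first_usage_date, last_usage_date, currently_used
    from   dba_feature_usage_statistics
    where  detected_usages > 0
    order  by name;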

    NAME                                               DETECTED_USAGES FIRST_USA LAST_USAG CURRE
    -------------------------------------------------- --------------- --------- --------- -----
    Audit Options                                                    1 09-OCT-15 09-OCT-15 TRUE
    Automatic SGA Tuning                                             1 09-OCT-15 09-OCT-15 TRUE
    Automatic SQL Execution Memory                                   1 09-OCT-15 09-OCT-15 TRUE
    Automatic Segment Space Management (system)                      1 09-OCT-15 09-OCT-15 TRUE
    Automatic Storage Management                                     1 09-OCT-15 09-OCT-15 TRUE
    Automatic Undo Management                                        1 09-OCT-15 09-OCT-15 TRUE
    Character Set                                                    1 09-OCT-15 09-OCT-15 TRUE
    Deferred Segment Creation                                        1 09-OCT-15 09-OCT-15 TRUE
    Locally Managed Tablespaces (system)                             1 09-OCT-15 09-OCT-15 TRUE
    Locally Managed Tablespaces (user)                               1 09-OCT-15 09-OCT-15 TRUE
    Logfile Multiplexing                                             1 09-OCT-15 09-OCT-15 TRUE
    Oracle Java Virtual Machine (system)                             1 09-OCT-15 09-OCT-15 TRUE
    Oracle Utility Metadata API                                      1 09-OCT-15 09-OCT-15 TRUE
    Parallel SQL Query Execution                                     1 09-OCT-15 09-OCT-15 TRUE
    Partitioning (system)                                            1 09-OCT-15 09-OCT-15 TRUE
    Real Application Clusters (RAC)                                  1 09-OCT-15 09-OCT-15 TRUE
    Recovery Area                                                    1 09-OCT-15 09-OCT-15 TRUE
    SecureFiles (system)                                             1 09-OCT-15 09-OCT-15 TRUE
    SecureFiles (user)                                               1 09-OCT-15 09-OCT-15 TRUE
    Server Parameter File                                            1 09-OCT-15 09-OCT-15 TRUE

    Thanks in advance,

    Franky

    You are not using it, but Oracle is, since you are running RAC. Oracle relies on parallel execution against the GV$ views, so even though you can't use it yourself in Standard Edition, Oracle can when it queries one of the GV$ views. That's why DBA_FEATURE_USAGE_STATISTICS reports the usage you see.

    David Fitzjarrell

  • ALTER TABLE MOVE ONLINE into an encrypted tablespace

    Hello

    Our DBAs want to put all our data into encrypted tablespaces.

    Once they create the encrypted tablespaces, we will have to run "alter table move online ..." commands to move the tables into the encrypted tablespaces.

    Has anyone here done this already? Does the table being moved really stay 'online' (available for query/insert/update) during the move? Or will we have to plan some downtime while the tables are moved?
    What about tables with LONG or LOB columns?

    Thank you!

    KSandberg, to add to John's reminder that only IOTs can be moved online: if the table has to be moved offline, the indexes will be unusable until they are rebuilt. If the table is an IOT, I suggest you rebuild its indexes online after the move, because even though the indexes remain usable, the logical ROWIDs no longer point to the correct locations and index access will be less efficient until you rebuild them.
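
    For illustration, a minimal sketch of an offline move into an encrypted tablespace followed by the index rebuild described above (the owner, table, index, and tablespace names are placeholders):

    ALTER TABLE app_owner.big_table MOVE TABLESPACE encrypted_ts;

    -- indexes on a moved heap table become UNUSABLE and must be rebuilt
    ALTER INDEX app_owner.big_table_pk REBUILD ONLINE TABLESPACE encrypted_ts;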

    - -

    The DBMS_REDEFINITION package could be an alternative solution if you need to move the non-IOT tables online.

    - -

    HTH - Mark D Powell.

  • Oracle Multitenant - an interesting container ID anomaly for tablespaces

    Let's say we have a Multitenant container database named cdb1 with some containers inside it:

    sys@CDB1> select con_id, name from v$containers;

        CON_ID NAME
    ---------- ------------------------------
             1 CDB$ROOT
             2 PDB$SEED
             3 PDB1_1

    Cool: the root container holds the common stuff, the seed holds a template for creating new pluggable databases, and PDB1_1 holds our data.

    Databases have tablespaces, so let's take a look at them:

    sys@CDB1> select ts#, name, con_id
              from   v$tablespace
              order  by ts#, con_id;

           TS# NAME           CON_ID
    ---------- ---------- ----------
             0 SYSTEM              1
             0 SYSTEM              2
             0 SYSTEM              3
             1 SYSAUX              1
             1 SYSAUX              2
             1 SYSAUX              3
             2 UNDOTBS1            1
             2 TEMP                2
             2 TEMP                3
             3 TEMP                1
             3 USERS               3
             4 USERS               1
             4 EXAMPLE             3

    Cool, we have three containers, and each tablespace is associated with a container, including the tablespace UNDOTBS1, which is used by all containers but lives in container 1.

    Once we connect to our pluggable database PDB1_1, we expect to see only the tablespaces we have access to in PDB1_1 (and of course nothing more):

    sys@PDB1_1> select ts#, name, con_id from v$tablespace order by ts#, con_id;

           TS# NAME           CON_ID
    ---------- ---------- ----------
             0 SYSTEM              3
             1 SYSAUX              3
             2 UNDOTBS1            0
             2 TEMP                3
             3 USERS               3
             4 EXAMPLE             3

    And we see almost exactly what we expect: all our tablespaces, with the tablespace UNDOTBS1 from container 1... except that it isn't. UNDOTBS1 now shows container 0.

    What? How and why did UNDOTBS1 jump containers?

    So why, when I'm in the root container, does UNDO show container 1? Why is it not 0?

    It's the exact same query in CDB$ROOT and in PDB1_1, but it returns different values for the 'same' row. Of course the V$ views are not guaranteed to be consistent, but I suspect the value changed between the select statements.

    The '1' for the root means the root owns it, which allows the root to modify it. A '0' means it is shared and cannot be changed. That is why '0' is shown in the PDB views.

    If you look at the DDL for V$TABLESPACE you will see that the value is mapped to '0' for PDBs.
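
    A minimal sketch of how to inspect that definition yourself, assuming access to V$FIXED_VIEW_DEFINITION (the GV$ name is the underlying fixed view):

    select view_definition
    from   v$fixed_view_definition
    where  view_name = 'GV$TABLESPACE';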

  • Moving a table within the same tablespace does not reorganize the data

    Hello.

    I am facing a problem that I did not use to have. First of all, a description of our environment:

    We have a few large partitioned tables, and for performance our ETLs use bulk loads, APPEND hints, parallelism and so on. This creates several holes of unused space in the tablespaces/datafiles, and thus a kind of space leak on our drives.

    A complete fix would be to re-create the tablespaces and move everything from one to the other. That would be impractical, because about 15 of them are above 100 GB; the time and effort to recreate everything is not affordable for the business.

    Instead, we have a single procedure that calculates the actual amount of used space (converted to blocks) and moves every object that sits above that block_id. Right after that, it does a dynamic shrink of the datafile based on the new HWM (given that the objects have been moved), freeing disk space. Since we have one datafile per tablespace and one tablespace per schema, we would like to keep this layout, so we issue a single move per object, like 'ALTER TABLE ' || owner || '.' || segment_name || ' MOVE;' (the complete script handles all segment types, such as table partitions, index partitions and subpartitions). This should move each object, within the same tablespace, into the first free space in the file and release space at the end of the file for the shrink. In theory.
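
    For illustration, a minimal sketch of the kind of high-water-mark check such a procedure relies on (the file_id value is a hypothetical example); it reports the highest allocated block in a datafile:

    select max(block_id + blocks - 1) highest_used_block
    from   dba_extents
    where  file_id = 12;  -- hypothetical datafile id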

    This procedure used to work properly. In a 650 GB tablespace with 530 GB in use, moving only the roughly 20 GB of data that sits beyond the 530 GB HWM is simpler than creating a new file/tablespace, and moving 20 GB is faster than moving 530 GB.

    But suddenly things changed and some tablespaces refused to shrink. What I found out: the MOVE command doesn't fail, it works fine and Oracle really does move the object. But for reasons I don't know, it does not move it to the beginning of the file; it keeps the object at the end. So the procedure calculates the new HWM, but because some objects are still at the tail of the file, the shrink is attempted with a very high HWM and no real space is reclaimed.

    So, the main question: how does ALTER TABLE FOO MOVE really work? I thought it would always move the object toward the beginning of the file, thereby reorganizing it, but I analyzed the last objects that gave me this problem (block_id before and after the move, compared against the free block_ids and so on) and I can see that they were in fact moved to the end of the file, although there was enough space to hold them at the beginning.

    Okay, I think I found the problem. At first I simply ran the script as posted, but then I had the 'good' idea of improving its performance with parallelism, so I added:

    ALTER SESSION FORCE PARALLEL QUERY PARALLEL 16;
    ALTER SESSION FORCE PARALLEL DDL PARALLEL 16;
    ALTER SESSION FORCE PARALLEL DML PARALLEL 16;

    Going back to non-parallel execution, I could again reuse the free space at the beginning of the file and then shrink it.
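
    For reference, a sketch of putting the session back to serial execution, the counterpart of the FORCE statements above:

    ALTER SESSION DISABLE PARALLEL QUERY;
    ALTER SESSION DISABLE PARALLEL DDL;
    ALTER SESSION DISABLE PARALLEL DML;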

    Obviously, data written in parallel (direct-path) mode does not reuse free space below the high-water mark; I just forgot that ALTER TABLE ... MOVE is also a data write operation. I feel a bit ridiculous, caught in the very trap I was trying to fix.

    Thank you all for the comments and advice.

  • Adding the host name and tablespace name to the alarm e-mail

    I'm trying to figure out how I can put the host name and the tablespace name into the alarm e-mails.

    Rule name: _DBO - Tablespace Usage

    Rule scope: DBO_Cust_DBO_Cust_Tablespace_pct2 where tablespace_name not like '%UNDO%'

    I tried the following expressions in the rule's condition query, and they did not work:

    return scope.get("tablespace_name").get("value")

    return scope.get("tablespace_name").get("name")

    Do you have any suggestions on how to display the host name and the tablespace name in the alarm e-mail?

    Using scope: DBO_Tablespace

    hostname = scope.get('monitoredHost/name')

    tablespaceName = scope.get('tablespace_name')

    Kind regards

    Brian Wheeldon
