Index segment reaching its MAXEXTENTS limit

Oracle version: 9.2.0.8.0 - 64bit Production

Operating system: IBM AIX


Hello


I am facing a problem with the MAXEXTENTS limit on an index segment.

The EXTENTS and MAX_EXTENTS columns in DBA_SEGMENTS for this index segment both show 1.

I tried to increase MAXEXTENTS for this segment, but I get this error message:

ALTER INDEX ... STORAGE (MAXEXTENTS 5)
*
ERROR at line 1:
ORA-25176: storage specification not permitted for primary key


I found that the tablespace containing this segment is dictionary-managed.

Is this problem caused by the tablespace being dictionary-managed?

How do I solve this problem?

Is it an index-organized table?
If so, you can't change the storage definition on the index - you need to do it on the associated table, with the ALTER TABLE ... STORAGE (...) syntax, as sketched below.
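A minimal sketch of that approach (the table name is a placeholder, not taken from the post; on an index-organized table the primary key index shares the table's storage, so the change is made through the table):

ALTER TABLE my_iot_table STORAGE (MAXEXTENTS 5);

-- or remove the ceiling altogether:
ALTER TABLE my_iot_table STORAGE (MAXEXTENTS UNLIMITED);

Longer term, moving the segment into a locally managed tablespace avoids MAXEXTENTS limits entirely, since extent limits are not enforced there.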

Tags: Database

Similar Questions

  • IIS 7.5, the NETBIOS command limit has been reached

    We use IIS and media services to stream videos. This concerns our dedicated streaming server. At peak hours, an error shows up saying 'the NETBIOS command limit has been reached'.

    More info on platform:

    • IIS version is 7.5
    • OS version is Windows Server 2008

    Any info on this would be extremely helpful.

    Thank you


    Hi, Pranab,

    Error message "the NETBIOS command limit has been reached" in Windows Server 2003, Windows XP, and in Windows 2000 Server

    http://support.Microsoft.com/kb/810886

    NETBIOS command limit has been reached

    http://social.technet.Microsoft.com/forums/en-us/w7itpronetworking/thread/fa2aa6f2-9f3e-4dd0-B203-60f910767574

    If you need more information, repost your question to the TechNet Forum

    http://social.technet.Microsoft.com/forums/en-us/w7itpronetworking/threads

  • Index segment vs data segment

    When I create a tablespace for data or for indexes, do I need to do anything special for indexes?

    What is an index segment?

    Thank you.

    You don't need to do anything special for index segments - a tablespace can hold either.

    Have a read on the myth of separating indexes and tables (i.e. there is no such requirement in practice):

    http://asktom.Oracle.com/pls/Apex/f?p=100:11:0:P11_QUESTION_ID:901906930328

    The following documentation explains the types of segments:

    http://download.Oracle.com/docs/CD/B19306_01/server.102/b14220/logical.htm#CNCPT301
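    For illustration, a minimal sketch (tablespace and object names here are hypothetical) - an index segment simply goes into whichever tablespace you name, with no special settings:

    CREATE TABLE emp_demo (empno NUMBER PRIMARY KEY, ename VARCHAR2(30))
      TABLESPACE users;

    -- the index segment lands wherever its TABLESPACE clause points
    CREATE INDEX emp_demo_ename_idx ON emp_demo (ename)
      TABLESPACE indx;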

    Hope that helps.
    Paul

  • LOB index segment

    Hi all:
    The LOB index segment is not deleted when you drop the table containing the LOB. Is this expected behaviour?

    1. Create the table:

    SQL> CREATE TABLE DemoLob (a NUMBER, b CLOB)
      2  LOB (b) STORE AS lobsegname
      3  (
      4  TABLESPACE datseg
      5  INDEX lobindexname
      6  (TABLESPACE idxseg
      7  )
      8  )
      9  TABLESPACE tabseg;

    Table created.

    2. Drop the table.

    3. Query the data dictionary:

    SQL> select segment_name, segment_type, tablespace_name, bytes/1024/1024
         from dba_segments where segment_name = 'LOBINDEXNAME';

    SEGMENT_NAME    SEGMENT_TYPE    TABLESPACE_NAME    BYTES/1024/1024
    --------------- --------------- ------------------ ---------------
    LOBINDEXNAME    LOBINDEX        DATSEG                       .0625


    Thank you
    San ~

    This is because the table is still in the recycle bin. If you use DROP with the PURGE option, or if you purge the recycle bin, the entry will disappear.
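    A minimal sketch of both options (the table name matches the example above):

    -- drop and purge in one step
    DROP TABLE DemoLob PURGE;

    -- or, after a plain DROP, empty the recycle bin afterwards
    PURGE RECYCLEBIN;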

  • How to limit which index fields are displayed on the standard check-in page?

    Hi all

    I have a question: how do I limit which index fields are displayed on the standard check-in page? I noticed there are many unrelated fields displayed on that standard page. Can I make only the useful index fields appear on the standard check-in page, and how would I do that?

    Thank you for your help.

    If you want to hide the field globally, including for all profiles too, you should be able to hide it at the user interface level.

    Configuration Manager > Information Fields > select the field > Edit > uncheck the box: Enable on User Interface > OK

    If you are going that far, you may just want to remove the field altogether.

  • Creative Cloud sign-in limit is reached

    After being signed in to CC, I entered an incorrect sign-in. This brought up a window indicating that the limit has been reached and that this sign-in is already in use on two computers.

    Now I would like to sign in again, but no matter what I try, whether clicking 'Sign out of the other computers so I can sign in here' or 'Activate with a different Adobe ID', the Application Manager closes on me.

    This is the case with every CC program.

    Hi Babraham,

    Open the link below and check whether signing out from the account page helps:

    Sign in and sign out to activate Creative Cloud apps

    Or

    "Activation limit reached" or "connection error impossible ' with Adobe applications

    Let us know if this helps!

  • Error when testing the server - user limit has been reached or the firewall is blocking. Using GoDaddy as a host

    Trying to upload files to GoDaddy from Dreamweaver CC 2015.2 - an FTP error occurred - cannot make connection to the host. The maximum number of users may have been reached, or you may not be authorized to make the connection because a local firewall is blocking FTP data.

    I have turned off the Windows firewall and other antivirus software. I'm not sure which limit it means has been reached.

    Help, please

    "Limit reached" means that the disk is full or you have reached the maximum of files you can store on your remote server. If necessary, you will need to remove some files before you can upload more.

  • Activation limit reached for Captivate 5.5

    Hi, I was told by an Adobe online chat representative that they no longer handle support for these products except through the forums, so I was redirected to ask my question here. I recently lost access to my two previous installations of Captivate 5.5 because the hard drives the installations were residing on have been reformatted. Thus, I cannot deactivate those installations, and when trying to activate the product on my new installation it says that my activation limit has been reached. Can someone from Adobe please reset my activation limit so that I can activate the product on my computer?

    Could you please send me your serial key and the email address under which it is registered, as a private message, so I can help you.

    Kind regards

    Rajeev.

  • Is there a defined LIMIT on a loop index parameter?

    Hello
    I have a procedure in my DB that loops through a number range (an i/p param), does a logical operation, and then inserts those values into a table.
    It worked well when a smaller range was passed, but if an 11-digit range is passed it throws the error "ORA-01426: numeric overflow".

    The code snippet where it fails:
    BEGIN

      FOR i IN 10010010010 + 1 .. 10010010013
      LOOP
        DBMS_OUTPUT.PUT_LINE(i);
      END LOOP;

    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE(SQLERRM);
    END;

    Can anyone let me know please whether there is any defined limit on the index parameter in a loop?

    Thank you
    Siham

    >
    I tried using while and LOOP statements too. But it did not work.
    >
    8-0

    SQL> declare
      2  t number :=10010010010+1;
      3  t_end number := 10010010013;
      4  i number;
      5  begin
      6    i:=t;
      7  while ( i<= t_end)
      8    loop
      9      dbms_output.put_line(i);
     10      i := i+1;
     11  end loop;
     12  end;
     13  /
    
    10010010011
    10010010012
    10010010013
    
    PL/SQL procedure successfully completed
    
    SQL> 
    

    See also http://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm#i22289
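    As background (not from the original reply): the counter of a PL/SQL FOR loop is a PLS_INTEGER, so both loop bounds must fit in the range -2147483648 .. 2147483647; an 11-digit bound overflows that range and raises ORA-01426. A minimal sketch:

    -- bounds inside the PLS_INTEGER range work
    BEGIN
      FOR i IN 2147483640 .. 2147483642 LOOP
        DBMS_OUTPUT.PUT_LINE(i);
      END LOOP;
    END;
    /

    -- any bound above 2147483647 (for example 10010010010) raises
    -- ORA-01426: numeric overflow, which is why the NUMBER-based WHILE
    -- loop shown above works where the FOR loop does not.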

    Edited by: Alexandr on November 30, 2011 09:30

  • ORA-01554: transaction concurrency limit reached reason

    Hi team,

    I am getting the error below:

    ORA-01554: transaction concurrency limit reached reason: no undo segment found with available slot params: 0, 0

    A Google hit says: Action: Shut down the system, modify the INIT.ORA parameters transactions, rollback_segments or rollback_segments_required, then restart.


    I'm on 11gR2 with Oracle Enterprise Linux. My init.ora settings are as below:

    SQL> show parameter undo

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    undo_management                      string      AUTO
    undo_retention                       integer     900
    undo_tablespace                      string      UNDOTBS4

    SQL> show parameter rollback

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    fast_start_parallel_rollback         string      LOW
    rollback_segments                    string
    transactions_per_rollback_segment    integer     5

    SQL> show parameter transaction

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    transactions                         integer     8289
    transactions_per_rollback_segment    integer     5


    Any suggestions on how to avoid this error without bouncing the database?
    Please advise.

    Thank you

    Edited by: user12096071 on July 26, 2011 15:14

    Queries:

    SELECT COUNT(*) FROM V$TRANSACTION ;
    SELECT STATUS, COUNT(*) FROM DBA_ROLLBACK_SEGS GROUP BY STATUS ORDER BY STATUS;
    

    should show how many transactions you have and how many active rollback segments. Unfortunately, these queries must be run immediately when (or shortly after) the error occurs - otherwise, transactions that have since completed, or undo segments that have gone offline, may change the picture.
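    A small addition (not from the original reply): another view worth checking at the same moment is V$ROLLSTAT, which shows the active transaction count per online undo segment:

    SELECT usn, xacts, status
    FROM   v$rollstat
    ORDER  BY usn;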

    Hemant K Chitale

    Edited by: Hemant K Chitale on July 27, 2011 10:23

  • CFThread GC Overhead limit exceeded

    I have an application that trades virtual items, with a single page that gets all my accounts and, for each one, creates a thread that initially logs the account in and then searches for and buys items for as long as the session is active. I should point out at this stage that this is my first experience of using cfthread.

    I have problems with it. Every 30 minutes (if not less) my ColdFusion server stops and I have to restart the service. After restarting the service, I check the logs and there are errors that say "GC Overhead limit exceeded".

    I've looked online at length, but cfthread is as new to me as the Java virtual machine and how it works. I'm running CF10 Enterprise Edition and loaded the server monitor, and sure enough I can see the JVM memory usage grow and grow until the limit is reached (at the moment I have it set to only 2 GB, because when I had it set higher the memory just seemed to fill up faster). Even when I select the Run GC option in the monitor, it does not reduce the memory usage much, or at all.

    It's more than likely something to do with my code? At the moment a little under 50 threads are created, but as I add more accounts to the application, extra threads will be needed.

    Here is the code of the page...

    <script>
        /* RELOAD THE PAGE EVERY 65 MINUTES */
        setTimeout(function() {
            window.location.reload(1);
        }, 3900000);
    </script>

    <!--- GET ACCOUNTS --->
    <cfquery name="getLogins" datasource="myDB">
        SELECT * FROM Logins WHERE active = 1
    </cfquery>

    <!--- LOOP OVER ACCOUNTS --->
    <cfloop query="getLogins">

        <!--- SLEEP SO THE IP IS NOT FLAGGED FOR SENDING TOO MANY REQUESTS AT THE SAME TIME --->
        <cfset Sleep(30000) />

        <!--- CREATE THREAD FOR ACCOUNT --->
        <cfthread
            name="#getLogins.accountName#"
            action="run"
            accountName="#Trim(getLogins.accountName)#"
            email="#Trim(getLogins.email)#"
            password="#Trim(getLogins.password)#"
            resourceId="#Trim(getLogins.resourceID)#">

            <!--- DEFAULT SESSION VARIABLES --->
            <cfset SESSION["#attributes.accountName#LoggedIn"] = 0 />
            <cfset SESSION["#attributes.accountName#LoginAttempts"] = 0 />

            <!--- WHILE ACCOUNT NOT LOGGED IN AND FEWER THAN 8 LOGIN ATTEMPTS MADE --->
            <cfscript>
                while (SESSION["#attributes.accountName#LoggedIn"] EQ 0 AND SESSION["#attributes.accountName#LoginAttempts"] LT 8) {

                    // ATTEMPT LOGIN
                    THREAD.logInAccount = Application.cfcs.Login.logInAccount(attributes.email, attributes.password);

                    // IF THE LOGIN ATTEMPT FAILED
                    if (THREAD.logInAccount EQ 0) {

                        // INCREASE THE ATTEMPT COUNT
                        SESSION["#attributes.accountName#LoginAttempts"] = SESSION["#attributes.accountName#LoginAttempts"] + 1;

                    }
                    // ELSE IF 481 IS RETURNED THEN THE ACCOUNT IS LOCKED
                    else if (THREAD.logInAccount EQ 481) {

                        // SET THE ATTEMPT COUNT SO THE LOOP STOPS
                        SESSION["#attributes.accountName#LoginAttempts"] = 8;

                        // UPDATE THE ACCOUNT TO MARK IT AS LOCKED
                        THREAD.updLogin = Application.cfcs.Login.updLogin(attributes.email);

                    }
                }
            </cfscript>

            <!--- IF THE ACCOUNT IS LOGGED IN --->
            <cfif SESSION["#attributes.accountName#LoggedIn"] EQ 1>

                <!--- SET ID FOR SEARCH --->
                <cfset THREAD.definitionID = attributes.resourceID - 1610612736 />

                <!--- WHILE THE ACCOUNT STAYS LOGGED IN --->
                <cfloop condition="SESSION['#attributes.accountName#LoggedIn'] EQ 1">

                    <!--- GET LATEST LOWEST BUY IT NOW PRICE --->
                    <cfquery name="THREAD.getMinBIN" datasource="WAS" cachedWithin="#CreateTimeSpan(0,0,1,0)#">
                        SELECT TOP 1 * FROM v_FUT14BINPrices WHERE resourceID = #attributes.resourceId# ORDER BY lastUpdated DESC
                    </cfquery>

                    <!--- INCLUDE FILE THAT CALCULATES THE PURCHASE AND SELLING PRICES --->
                    <cfinclude template="sellingPrices.cfm" />

                    <!--- IF THE BIDDING PRICE HAS BEEN SET --->
                    <cfif StructKeyExists(THREAD, "biddingPrice")>

                        <!--- MAKE THE SEARCH REQUEST --->
                        <cfset THREAD.requestStart = GetTickCount() />
                        <cfset THREAD.search = Application.cfcs.Search.dosearchOld(attributes.resourceId, THREAD.biddingPrice, 0) />
                        <cfset THREAD.requestDuration = GetTickCount() - THREAD.requestStart />

                        <!--- IF THE SEARCH RESPONSE CONTAINS FILE CONTENT --->
                        <cfif StructKeyExists(THREAD.search, "FileContent")>

                            <!--- DEFAULT THE NUMBER-OF-RESULTS VARIABLE --->
                            <cfset THREAD.numResults = 0 />

                            <!--- IF JSON WAS RETURNED --->
                            <cfif IsJSON(THREAD.search.FileContent)>

                                <!--- DESERIALIZE THE JSON --->
                                <cfset THREAD.searchResults = DeserializeJSON(THREAD.search.FileContent) />

                                <!--- IF THE PLAYER SEARCH RETURNS THE AUCTIONINFO STRUCT --->
                                <cfif StructKeyExists(THREAD.searchResults, "auctionInfo")>

                                    <!--- SET THE NUMBER OF CARDS RETURNED BY THE SEARCH --->
                                    <cfset THREAD.numResults = ArrayLen(THREAD.searchResults.auctionInfo) />
                                    <cfset THREAD.statusCode = "Successful" />

                                    <cfif THREAD.numResults EQ 0>
                                        <cfset THREAD.statusCode = "Successful - No Results" />
                                    </cfif>

                                <!--- OTHERWISE, IF AN ERROR CODE WAS RETURNED --->
                                <cfelseif StructKeyExists(THREAD.searchResults, "code")>

                                    <cfset THREAD.statusCode = THREAD.searchResults.code />

                                    <!--- IF CODE 401 THEN THE SESSION HAS EXPIRED --->
                                    <cfif THREAD.statusCode EQ 401>

                                        <!--- MARK THE SESSION AS LOGGED OUT AND TRY A SESSION REFRESH --->
                                        <cfset SESSION["#attributes.accountName#LoggedIn"] = 0 />
                                        <cfset THREAD.logInAccount = Application.cfcs.Login.logInAccount(attributes.email, attributes.password) />

                                    </cfif>

                                <!--- SOMETHING ELSE HAPPENED --->
                                <cfelse>

                                    <cfset THREAD.statusCode = "Something Else - " & THREAD.searchResults.code />

                                </cfif>

                                <!--- IF RESULTS WERE RETURNED --->
                                <cfif THREAD.numResults GT 0>

                                    <!--- LOOP OVER THE RESULTS AND CHECK WHETHER THE PURCHASE CRITERIA MATCH --->
                                    <cfloop index="i" from="1" to="#THREAD.numResults#">

                                        <!--- *SAFETY CHECK* - MAKE SURE THE ID OF THE CURRENT CARD IS THE SAME AS THE ONE SEARCHED FOR --->
                                        <cfif THREAD.searchResults.auctionInfo[i].itemData.resourceID EQ attributes.resourceId AND THREAD.getMinBIN.resourceID EQ attributes.resourceId>

                                            <!--- ENSURE THE BIN PRICE IS SET AND IS NO MORE THAN THE SET PURCHASE PRICE --->
                                            <cfif THREAD.searchResults.auctionInfo[i].buyNowPrice GT 0 AND THREAD.searchResults.auctionInfo[i].buyNowPrice LTE THREAD.biddingPrice>

                                                <!--- WORK OUT THE AUCTION END TIME --->
                                                <cfset THREAD.timeLeft = THREAD.searchResults.auctionInfo[i].expires />
                                                <cfset THREAD.auctionEnds = DateAdd("s", THREAD.timeLeft, Now()) />

                                                <!--- BUY CARD --->
                                                <cfset THREAD.buyCard = Application.cfcs.Bid.doBIN(THREAD.searchResults.auctionInfo[i].tradeID, THREAD.searchResults.auctionInfo[i].buyNowPrice, THREAD.searchResults.auctionInfo[i].startingBid, THREAD.searchResults.auctionInfo[i].itemData.ID, THREAD.searchResults.auctionInfo[i].itemData.resourceID, THREAD.startPrice, THREAD.binPrice, THREAD.lowestBIN, THREAD.searchResults.auctionInfo[i].itemData.discardValue, THREAD.auctionEnds, THREAD.requestStart, THREAD.requestDuration) />

                                            </cfif>

                                        </cfif>

                                    </cfloop>

                                </cfif>

                            <!--- OTHERWISE THE RESPONSE WAS NOT JSON --->
                            <cfelse>

                                <cfset THREAD.statusCode = THREAD.search.FileContent />

                            </cfif>

                            <cfset THREAD.sleepDuration = 1000 - THREAD.requestDuration />
                            <cfif THREAD.sleepDuration GT 0><cfset Sleep(THREAD.sleepDuration) /></cfif>

                        </cfif>

                        <!--- INSERT SEARCH RECORD --->
                        <cfset THREAD.insSearchRecord = Application.cfcs.Search.insSearchRecord(THREAD.definitionID, THREAD.statusCode, THREAD.requestDuration, THREAD.numResults, THREAD.biddingPrice) />

                    </cfif>

                </cfloop>

            </cfif>

        </cfthread>

    </cfloop>

    I would have thought that memory usage would stay roughly level, since each loop iteration performs the same set of actions; once the loop returns to the start, I assumed the previous iteration would be released from memory (freeing space) and the same actions would then use about the same amount again, and so on. But it almost seems as if every iteration is kept in memory, so usage just keeps growing.

    Could someone please help me and offer some advice on how I could fix this problem? If you need more info, let me know

    Thanks in advance

    The JVM graph looks like this, from gcviewer:

    The diagram does not show the PermGen details. The tail of the log says:

    PSPermGen total 1048576K, used 80172K; object space 1048576K, 7% used.

    While there may well be something to address in the CFM code, for the time being, to keep the system up, you may do best to make a few adjustments to the Java virtual machine. Remember that this is probably not a fix for the problem as a whole; rather, try it to see if the system stays standing while you continue to work on the other CFM issues.

    I'm expecting your JVM details look something like this:

    Minimum heap size: 1024
    Maximum heap size: 2048
    -XX:MaxPermSize=1024m

    The entire system is:

    CF10 on a Windows 7 x64 machine with an Intel Core i3-2100 processor and 8 GB of RAM.

    Seeing as not a lot of objects are kept in PermGen, you can make that smaller. Double the heap size and set a value for the new generation part of the heap. For example:

    Minimum heap size: 2048
    Maximum heap size: 4096
    -XX:MaxPermSize=324m
    -Xmn256m

    Or this way, if you prefer, in the CF Administrator:

    -server -Xmn256m -XX:MaxPermSize=324m -XX:PermSize=192m -XX:+UseParallelGC -Xbatch -Dcoldfusion.home={application.home} -Dcoldfusion.rootDir={application.home} -Dcoldfusion.libPath={application.home}/lib -Dorg.apache.coyote.USE_CUSTOM_STATUS_MSG_IN_HEADER=true -Dcoldfusion.jsafe.defaultalgo=FIPS186Random -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -verbose:gc -Xloggc:cfjvmGC.log

    Or this, by editing jvm.config:

    -server -Xms2048m -Xmx4096m -Xmn256m -XX:MaxPermSize=324m -XX:PermSize=192m -XX:+UseParallelGC -Xbatch -Dcoldfusion.home={application.home} -Dcoldfusion.rootDir={application.home} -Dcoldfusion.libPath={application.home}/lib -Dorg.apache.coyote.USE_CUSTOM_STATUS_MSG_IN_HEADER=true -Dcoldfusion.jsafe.defaultalgo=FIPS186Random -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -verbose:gc -Xloggc:cfjvmGC.log

    For now, it might also be wise to set the Java virtual machine to run a full garbage collection every 10 minutes, though I'm undecided whether to post more thoughts on that, as it seems more like window dressing to me. I would stay with UseParallelGC, because it tends to keep the heap cleared out, rather than change the garbage collector to something that tends to keep objects in memory.

    Java is 7, so not the EOL'd Java 6, though just to say that 7u15 is old, with 7u67 being current. Java 8 is also out, but I cannot substantiate Adobe's claim about running CF10 on Java 8. Given that you are not using a newer garbage collection algorithm like G1GC (and for now I do not recommend you do), I would stay with 7u15 for now.

    HTH, Carl.

  • Is it possible to set a daily limit of data on a Linksys router?

    Well, I would like to give a little back-story.

    Recently, those I live with and I moved to a rather remote little house in the desert, so our internet options are limited. Our only choices were DSL, which was 'only slightly faster than dial-up' (that is exactly what we were told), or satellite (we didn't even consider dial-up).

    After a few months of deciding, we finally chose the best rotten apple in the bucket of rotten apples and got HughesNet. Of course, we are locked into a one-year contract and all that business.

    -All right, let's not waste any more time and get to the meat of my post.

    So, we have a total data limit (500 MB), but some of my roommates either forget to check their data use or don't care enough to keep track. As a result, I get on the computer to find the limit at a very low remaining percentage, or completely used up. And HughesNet has recently been throttling us hard when the limit has been reached - so much so that we can barely load a single web page at a time, and that's assuming only one person is using the internet.

    So now that you know the problem, here's what I want to know:

    Is there any way to limit internet use per IP or per connection on the router, for free (they are really paranoid and some of them won't even let me find their MAC addresses), to, say, 100 MB used in total in a day, and then completely cut their internet overnight? (Our data limit gets reset daily.) I don't care if it takes third-party software or not. I have spoken with them about this issue several times, and they all agree to only use a certain amount and then go over that amount anyway.

    There is also a time during the night when internet use is no longer limited and you get free internet until it ends. So I would also need to know how to lift this cap at a specific time and then have the cap resume later.

    (if this is not possible, don't worry, I can do it manually as I tend to be on the computer during this time. It would just be nice to have it done automatically.)

    Finally, if this is possible and ONLY as a bonus, is there a way to let them on the internet after the full 500 MB has been used (or almost), so that they can at least enjoy what little internet remains after that point? It would be great to have, but not at all necessary.

    So, any ideas?

    The feature you are looking for is not available in Linksys/Cisco routers... You cannot restrict internet service after a certain usage limit...

  • My restore limit is full. Is there a way to delete parts of it to allow new restore dates?

    My restore limit has been reached. Is there a way to delete the old information to make room for the most recent restore points?

    Hi SharonHarlow,

    1. Which limit are you referring to?
    2. What is the exact error message you get?

    If you are referring to deleting restore points, then I suggest you try the steps from the following link:

    Delete a restore point
    http://Windows.Microsoft.com/en-us/Windows-Vista/delete-a-restore-point

  • Strange behaviour of a subpartitioned index

    Hi all!

    I am facing a strange problem on my database.

    I have a huge table that is partitioned by year and subpartitioned by month and day. On this table we have 13 indexes that are partitioned and subpartitioned the same way.

    For a given day, July 13, the index subpartition segments are missing, but when I query DBA_IND_SUBPARTITIONS for that date the index subpartitions are there:

    SQL> select index_name, subpartition_name, status
      2  from dba_ind_subpartitions c
      3  where subpartition_name = 'P_ANO_04_JUL_13'
      4  and index_name like '%FATINDFIXOL%';

    INDEX_NAME                      SUBPARTITION_NAME   STATUS
    ------------------------------- ------------------- --------
    IDX_FATINDFIXOL_ID_TERMINAL_C   P_ANO_04_JUL_13     USABLE
    IDX_FATINDFIXOL_PARTE_TARIFAD   P_ANO_04_JUL_13     USABLE
    IDX_FATINDFIXOL_CSP             P_ANO_04_JUL_13     USABLE
    IDX_FATINDFIXOL_FDS             P_ANO_04_JUL_13     USABLE
    IDX_FATINDFIXOL_CATEGORIA       P_ANO_04_JUL_13     USABLE
    IDX_FATINDFIXOL_FLAG_PORTAB     P_ANO_04_JUL_13     USABLE
    IDX_FATINDFIXOLIDTERMORIGPORT   P_ANO_04_JUL_13     USABLE
    IDX_FATINDFIXOLIDTERMDESTPORT   P_ANO_04_JUL_13     USABLE
    IDX_FATINDFIXOL_ORIGEM          P_ANO_04_JUL_13     USABLE
    IDX_FATINDFIXOL_DESTINO         P_ANO_04_JUL_13     USABLE
    IDX_FATINDFIXOL_ID_CARACT_FIX   P_ANO_04_JUL_13     USABLE
    IDX_FATINDFIXOL_ROTA_ENTRADA    P_ANO_04_JUL_13     USABLE
    IDX_FATINDFIXOL_ROTA_SAIDA      P_ANO_04_JUL_13     USABLE

    SQL> select segment_name, partition_name, bytes/1024/1024 MB
      2  from dba_segments
      3  where partition_name = 'P_ANO_04_JUL_13'
      4  and segment_name like '%FATINDFIXOL%';

    no rows selected

    Showing another day, July 14:

    SQL> select index_name, subpartition_name, status
      2  from user_ind_subpartitions c
      3  inner join user_segments a on (a.partition_name = c.subpartition_name and a.segment_name = c.index_name)
      4  where subpartition_name = 'P_ANO_04_JUL_14'
      5  and index_name like '%FATINDFIXOL%';

    INDEX_NAME                      SUBPARTITION_NAME   STATUS
    ------------------------------- ------------------- --------
    IDX_FATINDFIXOLIDTERMDESTPORT   P_ANO_04_JUL_14     UNUSABLE
    IDX_FATINDFIXOLIDTERMORIGPORT   P_ANO_04_JUL_14     UNUSABLE
    IDX_FATINDFIXOL_CATEGORIA       P_ANO_04_JUL_14     UNUSABLE
    IDX_FATINDFIXOL_CSP             P_ANO_04_JUL_14     USABLE
    IDX_FATINDFIXOL_DESTINO         P_ANO_04_JUL_14     UNUSABLE
    IDX_FATINDFIXOL_FDS             P_ANO_04_JUL_14     UNUSABLE
    IDX_FATINDFIXOL_FLAG_PORTAB     P_ANO_04_JUL_14     UNUSABLE
    IDX_FATINDFIXOL_ID_CARACT_FIX   P_ANO_04_JUL_14     UNUSABLE
    IDX_FATINDFIXOL_ID_TERMINAL_C   P_ANO_04_JUL_14     UNUSABLE
    IDX_FATINDFIXOL_ORIGEM          P_ANO_04_JUL_14     UNUSABLE
    IDX_FATINDFIXOL_PARTE_TARIFAD   P_ANO_04_JUL_14     UNUSABLE
    IDX_FATINDFIXOL_ROTA_ENTRADA    P_ANO_04_JUL_14     UNUSABLE
    IDX_FATINDFIXOL_ROTA_SAIDA      P_ANO_04_JUL_14     UNUSABLE

    SQL> select segment_name, partition_name, bytes/1024/1024 MB
      2  from dba_segments
      3  where partition_name = 'P_ANO_04_JUL_14'
      4  and segment_name like '%FATINDFIXOL%';

    SEGMENT_NAME                    PARTITION_NAME                  MB
    ------------------------------- ------------------------------- ----------
    IDX_FATINDFIXOL_ROTA_ENTRADA    P_ANO_04_JUL_14                 .0625
    IDX_FATINDFIXOL_ROTA_SAIDA      P_ANO_04_JUL_14                 .0625
    IDX_FATINDFIXOL_CATEGORIA       P_ANO_04_JUL_14                 .0625
    IDX_FATINDFIXOL_FLAG_PORTAB     P_ANO_04_JUL_14                 .0625
    IDX_FATINDFIXOL_ORIGEM          P_ANO_04_JUL_14                 .0625
    IDX_FATINDFIXOL_DESTINO         P_ANO_04_JUL_14                 .0625
    IDX_FATINDFIXOL_ID_CARACT_FIX   P_ANO_04_JUL_14                 .0625
    IDX_FATINDFIXOL_ID_TERMINAL_C   P_ANO_04_JUL_14                 .0625
    IDX_FATINDFIXOL_PARTE_TARIFAD   P_ANO_04_JUL_14                 .0625
    IDX_FATINDFIXOL_CSP             P_ANO_04_JUL_14                 .0625
    IDX_FATINDFIXOL_FDS             P_ANO_04_JUL_14                 .0625
    IDX_FATINDFIXOLIDTERMORIGPORT   P_ANO_04_JUL_14                 .0625
    IDX_FATINDFIXOLIDTERMDESTPORT   P_ANO_04_JUL_14                 .0625

    Also, if I run a query using any of these indexes and generate an explain plan, it shows that the index is used:

    13 July:

    SQL> select CATEGORIA from USRCARGADB.fato_indice_fixa_online subpartition (p_ano_04_jul_13) where categoria = 'ASDF';

    no rows selected

    Elapsed: 00:00:00.01

    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 690609349

    -------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                    | Name                      | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    -------------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT             |                           |     1 |     3 |     1   (0)| 00:00:01 |       |       |
    |   1 |  PARTITION COMBINED ITERATOR |                           |     1 |     3 |     1   (0)| 00:00:01 |   KEY |   KEY |
    |*  2 |   INDEX RANGE SCAN           | IDX_FATINDFIXOL_CATEGORIA |     1 |     3 |     1   (0)| 00:00:01 |  1315 |  1315 |
    -------------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       2 - access("CATEGORIA"='ASDF')

    11 July:

    SQL> select CATEGORIA from USRCARGADB.fato_indice_fixa_online subpartition (p_ano_04_jul_11) where categoria = 'ASDF';

    no rows selected

    Elapsed: 00:00:00.03

    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 690609349

    -------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                    | Name                      | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    -------------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT             |                           |     1 |     3 |     3   (0)| 00:00:01 |       |       |
    |   1 |  PARTITION COMBINED ITERATOR |                           |     1 |     3 |     3   (0)| 00:00:01 |   KEY |   KEY |
    |*  2 |   INDEX RANGE SCAN           | IDX_FATINDFIXOL_CATEGORIA |     1 |     3 |     3   (0)| 00:00:01 |  1313 |  1313 |
    -------------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       2 - access("CATEGORIA"='ASDF')

    Does anyone know what might have happened to this specific index segment for July 13? If I try to rebuild the subpartition or mark it UNUSABLE, it works, but still nothing shows up in DBA_SEGMENTS or DBA_EXTENTS. The version is 11.2.0.3.

    Thank you

    Post edited by: rafa.aborges

    You have still not provided your 4-digit Oracle version or the DDL for the TABLE and the INDEX.

    Your 'problem' might be DEFERRED SEGMENT creation, where Oracle does not allocate space until the first data is added to that segment.

    See 'Understanding Deferred Segment Creation' in the DBA Guide:

    http://docs.Oracle.com/CD/E11882_01/server.112/e25494/tables002.htm#CHDGJAGB

    >

    Understanding Deferred Segment Creation

    Beginning with Oracle Database 11g Release 2, when you create heap-organized tables in a locally managed tablespace, the database defers creating the table segment until the first row is inserted.

    In addition, segment creation is deferred for any LOB columns of the table, any indexes created implicitly as part of table creation, and any indexes created explicitly on the table afterward.

    Note:

    In release 11.2.0.1, deferred segment creation is not supported for partitioned tables. This restriction was removed in release 11.2.0.2 and later.

    >

    It is MANDATORY that posters provide their 4-digit Oracle version. There are too many features that depend on the version in use.
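    As a quick way to check that theory, a small sketch (assuming 11.2, where the dictionary views expose a SEGMENT_CREATED column; adjust names as needed):

    -- is deferred segment creation switched on?
    SHOW PARAMETER deferred_segment_creation

    -- does the July 13 index subpartition have a physical segment yet?
    SELECT index_name, subpartition_name, status, segment_created
    FROM   dba_ind_subpartitions
    WHERE  subpartition_name = 'P_ANO_04_JUL_13'
    AND    index_name LIKE '%FATINDFIXOL%';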

  • ACTIVATION LIMIT

    I bought a new desktop and when I tried to install Acrobat Pro, it tells me that my activation limit has been reached.

    HI MLIM

    A serial key allows you to install the product on two machines.

    Please contact the Adobe Support and they will help you.

    Please call: 1-800-833-6687

    or

    You can also contact our support team at http://adobe.ly/yxj0t6
