PL/SQL code to add datafiles to a tablespace

Can someone help me with some PL/SQL code? I want to do the following:

get the current total size of the tablespace from its data files
if (value > 30 GB)
then

Loop
alter the tablespace: add datafile '+DATA01' size 30 GB
# Note: a data file cannot be larger than 30 GB
# i.e. if 40 GB is entered, two entries are created: one for 30 GB,
# the second for 10 GB
end loop
else
same logic, but add a datafile sized up to 30 GB
Loop
if we go over 30 GB, create a new data file
end loop
end if

Please excuse the syntax; I know it's not correct. In summary,
what I want to do is create data files no larger than 30 GB; for any
extra space, simply create new datafiles until we reach the "value in"
limit.

The '+DATA01' disk group can be hard-coded.

Note, I don't want to use datafile autoextend; I want to control the size of my storage space.

Any code would be greatly appreciated.

Thanks to all those who responded

create or replace
procedure add_datafile(
  p_tablespace varchar2,
  p_size_in_gigabytes number
) is
  space_required  number := 0;   -- GB of datafiles still to generate
  space_created   number := 0;   -- GB generated so far by the loop
  file_max_size   number := 30;  -- maximum size of a single datafile, in GB
  last_file_size  number := 0;   -- size of the final, smaller datafile
begin
  for ts in (
    select
      tablespace_name,
      round(sum(bytes / 1024 / 1024 / 1024)) current_gigabytes
    from dba_data_files
    where tablespace_name = upper(p_tablespace)
    group by tablespace_name
  ) loop
    dbms_output.put_line('-- current size of ' || ts.tablespace_name || ' is ' ||  ts.current_gigabytes  || 'G');
    space_required := p_size_in_gigabytes - ts.current_gigabytes;
    dbms_output.put_line('-- adding files ' || space_required || 'G up to ' || p_size_in_gigabytes || 'G with files max ' || file_max_size || 'G');
    last_file_size := mod(space_required, file_max_size);
    while space_created < (space_required - last_file_size)
    loop
      dbms_output.put_line('alter tablespace ' || ts.tablespace_name || q'" add datafile '+DATA01' size "' || file_max_size || 'G;');
      space_created := space_created + file_max_size;
    end loop;
    if space_created < space_required then
      dbms_output.put_line('alter tablespace ' || ts.tablespace_name || q'" add datafile '+DATA01' size "' || last_file_size || 'G;');
    end if;
  end loop;
end;
/  

set serveroutput on size unlimited
exec add_datafile('sysaux', 65);  

PROCEDURE ADD_DATAFILE compiled
anonymous block completed
-- current size of SYSAUX is 1G
-- adding files 64G up to 65G with files max 30G
alter tablespace SYSAUX add datafile '+DATA01' size 30G;
alter tablespace SYSAUX add datafile '+DATA01' size 30G;
alter tablespace SYSAUX add datafile '+DATA01' size 4G;
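
If you would rather have the procedure execute the statements than just print them, a minimal variation (a sketch, untested; v_sql is a hypothetical local variable) builds each ALTER command without the trailing semicolon and passes it to EXECUTE IMMEDIATE:

-- sketch: run the generated statement instead of printing it
v_sql := 'alter tablespace ' || ts.tablespace_name ||
         q'[ add datafile '+DATA01' size ]' || file_max_size || 'G';
execute immediate v_sql;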

Tags: Database

Similar Questions

  • How do you add all the data from an iPad 2 to an iPad mini III?

    I want to add all the data from an iPad 2 to an iPad mini without deleting the data already on the iPad mini, so I can restore the iPad 2 to its default settings and then give it to my grandson.

    1. Transfer content from an iPhone, iPad or iPod touch to a new device - Apple Support

    2. What to do before you sell or give away your iPhone, iPad or iPod touch - Apple Support

  • When does OMF add a data file to the tablespace?

    Hi friends,

    We use Oracle 11.2 with OMF (Oracle Managed Files) for the database on the Linux platform. The DB_CREATE_FILE_DEST parameter has been set. Recently, we received a warning message that a tablespace has crossed the 85% full threshold.

    According to the Oracle documentation, OMF will automatically add a data file to the tablespace. More than a week on, I do not see any data file having been added by OMF.

    I want to know when OMF adds a data file to the tablespace: at 85%, at 95%, or at some (parameter) setting driving this action?

    Thank you

    newdba

    OMF does not automatically add a new data file.  You must explicitly add a new data file with the ALTER TABLESPACE tbsname ADD DATAFILE command.

    What OMF does is provide a unique name and a default size (100 MB) for the data file.  That is why the ALTER TABLESPACE ... ADD DATAFILE command that you ran didn't need to specify the file size or file name.
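
    For example, with DB_CREATE_FILE_DEST set, the command can be as minimal as this (a sketch, assuming a tablespace named USERS):

    -- OMF generates the file name and applies the default 100 MB size
    alter tablespace users add datafile;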

    Hemant K Collette

  • SQL*Plus command to display the size of the data file

    How can I find out my tables' disk space after inserting data?

    Thank you

    Hello

    Do you mean the space occupied by the tables in the database?
    If yes:

    select sum(bytes)/1024/1024, segment_name from dba_segments group by segment_name
    or
    select sum(bytes)/1024/1024 from dba_segments where segment_name = 'Table_name'
    

    Or do you mean the space occupied by the data files:

    select bytes/1024/1024, file_name from dba_data_files;
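
    If you want the totals per tablespace rather than per file, a variation on the same view (a sketch) is:

    select tablespace_name, sum(bytes)/1024/1024 as mb
    from dba_data_files
    group by tablespace_name;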
    

    Regards
    Anurag

  • When loading, error: field in the data file exceeds the maximum length

    Oracle Database 11 g Enterprise Edition Release 11.2.0.3.0 - 64 bit Production

    PL/SQL Release 11.2.0.3.0 - Production

    CORE Production 11.2.0.3.0

    AMT for Solaris: 11.2.0.3.0 - Production Version

    NLSRTL Version 11.2.0.3.0 - Production

    I am trying to load a small table (110 rows, 6 columns).  One of the columns, called NOTES, errors out when I run the load, saying that the field exceeds the maximum length.  As you can see here, the column in the table is 4000 bytes:

    CREATE TABLE NRIS.NRN_REPORT_NOTES
    (
      NOTES_CN      VARCHAR2(40 BYTE) DEFAULT sys_guid() NOT NULL,
      REPORT_GROUP  VARCHAR2(100 BYTE) NOT NULL,
      POSTCODE      VARCHAR2(50 BYTE) NOT NULL,
      ROUND         NUMBER(3) NOT NULL,
      NOTES         VARCHAR2(4000 BYTE),
      LAST_UPDATE   TIMESTAMP(6) WITH TIME ZONE DEFAULT systimestamp NOT NULL
    )
    TABLESPACE USERS
    RESULT_CACHE (MODE DEFAULT)
    PCTUSED 0
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
      INITIAL 80K
      NEXT 1M
      MINEXTENTS 1
      MAXEXTENTS UNLIMITED
      PCTINCREASE 0
      BUFFER_POOL DEFAULT
      FLASH_CACHE DEFAULT
      CELL_FLASH_CACHE DEFAULT
    )
    LOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING;

    I did a little investigating, and it doesn't add up.

    When I run

    select max(lengthb(notes)) from NRIS.NRN_REPORT_NOTES;

    I get a return of

    643

    which tells me that the largest value in this column is only 643 bytes.  But EVERY insert fails.

    Here is the header of the loader control file and the first couple of inserts:

    LOAD DATA
    INFILE *
    BADFILE './NRIS.NRN_REPORT_NOTES.BAD'
    DISCARDFILE './NRIS.NRN_REPORT_NOTES.DSC'
    APPEND INTO TABLE NRIS.NRN_REPORT_NOTES
    FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '|'
    (
      NOTES_CN,
      REPORT_GROUP,
      POSTCODE,
      ROUND NULLIF (ROUND = 'NULL'),
      NOTES,
      LAST_UPDATE TIMESTAMP WITH TIME ZONE 'MM/DD/YYYY HH24:MI:SS.FF9 TZR' NULLIF (LAST_UPDATE = 'NULL')
    )
    BEGINDATA

    | E2ACF256F01F46A7E0440003BA0F14C2; | | DEMOGRAPHIC DATA |; A01003; | 3 ; | demographic results show that 46% of visits are made by women.  Among racial and ethnic minorities, the most often encountered are native American (4%) and Hispanic / Latino (2%).  The breakdown by age shows that the Bitterroot has a relatively low of children under 16 (14%) proportion in the population of visit.  People over 60 represent about 22% of visits.   Most of the visitation comes from the region.  More than 85% of the visits come from people who live within 50 miles. | ; 29/07/2013 0, 16:09:27.000000000 - 06:00

    | E2ACF256F02046A7E0440003BA0F14C2; | | DESCRIPTION OF THE VISIT; | | A01003; | 3 ; | most visits to the Bitterroot are relatively short.  More than half of the visits last less than 3 hours.  The median duration of visiting sites for the night is about 43 hours, or about 2 days.  The average Wilderness visit lasts only about 6 hours, although more than half of these visits are shorter than the duration of 3 hours.   Most of the visits come from people who are frequent visitors.  Over thirty percent are made by people who visit between 40 and 100 times a year.  Another 8% of visits from people who say they visit more than 100 times a year. | ; 29/07/2013 0, 16:09:27.000000000 - 06:00

    | E2ACF256F02146A7E0440003BA0F14C2; | | ACTIVITIES |. A01003; | 3 ; | most often reported the main activity is hiking (42%), followed by alpine skiing (12%) and hunting (8%).  More than half of the report visits participating in the relaxation and the display landscape. | ; 29/07/2013 0, 16:09:27.000000000 - 06:00

    Here's the start of the loader log, ending after the first record is returned (they ALL show the same error):

    SQL*Loader: Release 10.2.0.4.0 - Production on Thu Aug 22 12:09:07 2013

    Copyright (c) 1982, 2007, Oracle.  All rights reserved.

    Control File:   NRIS.NRN_REPORT_NOTES.CTL
    Data File:      NRIS.NRN_REPORT_NOTES.CTL
    Bad File:       ./NRIS.NRN_REPORT_NOTES.BAD
    Discard File:   ./NRIS.NRN_REPORT_NOTES.DSC
     (Allow all discards)

    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Bind array:     64 rows, maximum of 256000 bytes
    Continuation:   none specified
    Path used:      Conventional

    Table NRIS.NRN_REPORT_NOTES, loaded from every logical record.
    Insert option in effect for this table: APPEND

    Column Name                    Position   Len  Term Encl Datatype
    ------------------------------ ---------- ----- ---- ---- ---------------------
    NOTES_CN                            FIRST     *   ;  O(|) CHARACTER
    REPORT_GROUP                         NEXT     *   ;  O(|) CHARACTER
    POSTCODE                             NEXT     *   ;  O(|) CHARACTER
    ROUND                                NEXT     *   ;  O(|) CHARACTER
        NULL if ROUND = 0X4e554c4c(character 'NULL')
    NOTES                                NEXT     *   ;  O(|) CHARACTER
    LAST_UPDATE                          NEXT     *   ;  O(|) DATETIME MM/DD/YYYY HH24:MI:SS.FF9 TZR
        NULL if LAST_UPDATE = 0X4e554c4c(character 'NULL')

    Record 1: Rejected - Error on table NRIS.NRN_REPORT_NOTES, column NOTES.
    Field in data file exceeds maximum length.

    I don't see why this should fail.

    Hello

    the problem is the default datatype in sqlldr: CHAR(255)... Very useful, I know...

    You need to tell sqlldr that the data is longer than this.

    So change NOTES to NOTES CHAR(4000) in your control file and it should work.

    Cheers,

    Harry

  • After renaming the data file, cannot start the database

    Oracle 11gR2
    OEL 5 (ASM)
    Grid Infrastructure (cluster install) - no RAC yet

    Something interesting happened. Perhaps this question might be better suited to the ASM section, but here it is.

    I gave a data file in ASM an alias with the following command (on the ASM instance):
    SQL> alter diskgroup DATA add alias '+DATA/oradb/datafile/users01.dbf' for '+DATA/oradb/datafile/users.253.795630487';
    
    Diskgroup altered.
    Then, as mentioned in Note 564993.1, we need to update the database as well with the new alias. However, when I went to shut down and start the database, I received the following:
    SQL> shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup mount;
    ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
    That's strange. Nothing has changed except the alias being added to the data file. This was all working before. What happened?

    I think it may have to do with the different ASMOPER, ASMADMIN, ASMDBA groups that were set up as part of the Grid Infrastructure installation. In addition, the listener is running out of the GI home.

    All my environment variables (e.g. ORACLE_SID, ORACLE_HOME, etc.) are defined.

    Any ideas?

    Thank you.

    Hello
    When you connect with a tnsnames alias while the database is down, the listener does not know the service, so you must start the database on the database server with sqlplus / as sysdba. After startup, the database registers with the listener and your connection works again.
    Alternatively, you can add a static service entry to your listener.ora so that you can connect while the database is down; as long as you do not have that, you must use a local connection, which means sqlplus / as sysdba.
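
    For example, a minimal local startup might look like this (a sketch; the ORACLE_SID value oradb is assumed from the alias path above):

    $ export ORACLE_SID=oradb
    $ sqlplus / as sysdba
    SQL> startup mount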

    Regards
    Peter

  • I CAN QUERY A TABLE EVEN AFTER THE DELETION OF THE DATA FILE

    Hello

    Can someone explain to me why I am able to query some tables even after deleting the data file they are associated with?

    SQL> select table_name, tablespace_name from dba_tables where owner = 'SCOTT';

    TABLE_NAME                     TABLESPACE_NAME
    ------------------------------ ------------------------------
    TEST2                          USERS
    TEST                           USERS
    SALGRADE                       USERS
    BONUS                          USERS
    EMP                            USERS
    DEPT                           USERS

    6 rows selected.

    SQL> exit

    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production

    With partitioning, OLAP, Data Mining and Real Application Testing options

    [oracle@localhost orcl]$ rm /app/oracle/oradata/orcl/users01.dbf

    [oracle@localhost orcl]$ sqlplus scott/scott

    SQL*Plus: Release 11.2.0.1.0 Production on Mon Mar 30 21:35:54 2015

    Copyright (c) 1982, 2009, Oracle.  All rights reserved.

    Connected to:

    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production

    With partitioning, OLAP, Data Mining and Real Application Testing options

    SQL> select count(*) from test2;
    select count(*) from test2
    *
    ERROR at line 1:
    ORA-01116: error in opening database file 4
    ORA-01110: data file 4: '/app/oracle/oradata/orcl/users01.dbf'
    ORA-27041: unable to open file
    Linux Error: 2: No such file or directory
    Additional information: 3

    SQL> select count(*) from test;

      COUNT(*)
    ----------
          5000

    SQL>

    The first output is as expected. But why am I still able to query the test table, even though the data file has been deleted?

    Hello

    The database processes hold a file handle for the data file; this remains even when the file is deleted (it disappears from normal file system navigation).

    You can see it if you have lsof installed:

    just try

    lsof | grep datafile_name

    Once the database is restarted, the file handle is released, so you will not be able to do this any more; in fact, you will get errors when it can't find the file.

    Cheers,

    Rich

  • RMAN issues - no backup or copy of the data file found

    Oracle 11gR2

    Linux RHEL 6.5

    I inherited a database backup and restore question since the DBA is OoO.

    Here is the script used for the backup:

    configure default device type to disk;
    configure controlfile autobackup on;
    configure controlfile autobackup format for device type disk to '/u01/app/oracle/bkp/controlfile/%F.ctl';
    configure retention policy to recovery window of 30 days;
    show all;
    run {
      shutdown immediate;
      startup mount;
      allocate channel dup1 device type disk;
      allocate channel dup2 device type disk;
      sql "create pfile=''/u01/app/oracle/bkp/pfile/initpfile.ora'' from spfile";
      backup format '/u01/app/oracle/bkp/cold_db/cold_bkp_%U' database;
      release channel dup1;
      release channel dup2;
      alter database open;
    }

    When I try the following restore script:

    run
    {
      startup nomount pfile='/u01/app/oracle/bkp/pfile/initpfile.ora';
      restore controlfile from '/u01/app/oracle/bkp/controlfile/c-123131414-20140509-00.ctl';
      alter database mount;
      restore database;
      alter database open resetlogs;
    }

    I get the error RMAN-06023: no backup or copy of the data file found.

    I'm trying to restore a database backup from 5 days ago, and I am using that backup's control file.

    I'll close this discussion and continue working with Oracle Support.  Thank you all for your help.

  • Cannot alter the database datafile

    Unable to Alter Database Datafile

    Hi all

    Using: alter database datafile 'C:\Oracle\APP\ORADATA\...\USERS01.DBF' resize 100M;

    I am getting this error:

    SQL> alter database datafile 'C:\Oracle\APP\ORADATA\...\USERS01.DBF' resize 100M;
    Error report:
    SQL Error: ORA-01237: cannot extend datafile 4
    ORA-01110: data file 4: 'C:\Oracle\APP\ORADATA\...\USERS01.DBF'
    ORA-27059: could not reduce file size
    OSD-04005: SetFilePointer() failure, unable to read from file
    O/S-Error: (OS 112) There is not enough space on the disk.

    01237. 00000 - "cannot extend %s datafile"
    *Cause:    An operating system error occurred during the resize.
    *Action:   Resolve the cause of the operating system error and retry the command.

    I understand that the OS disk is full, and I guess this isn't an Oracle-specific error but an OS-level error. Could someone suggest how to deal with clearing space? Is it possible to shrink and reuse the data files? Please suggest.

    Please help me out.

    Thank you and best regards,
    Cabbage

    You need to create more space by deleting unnecessary files or shrinking the data files.
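
    For example (a sketch; the path is a placeholder, and RESIZE can only shrink a file down to its highest allocated block, so check dba_free_space first):

    alter database datafile 'C:\ORACLE\APP\ORADATA\ORCL\USERS01.DBF' resize 50M;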

  • Field in the data file exceeds the maximum length - CTL file error

    Hello

    I am loading data into a new system using a CTL file, but I get the error "field in data file exceeds maximum length" for a few records; the other records are processed successfully. I checked the length of the failing record in the extract file, and it is less than the length of the target column, VARCHAR2(2000 BYTE). Here is an example of the error data:


    Hi Rebecca ~ I just talk to our Finance Department and they agreed that ABC payments can be allocated to the outstanding invoices, you can send all future invoices directly to me so that I could get paid on time. ~ hope it's okay ~ thank you ~ Terry ~.

    Is this error caused by the special characters in the string?

    Here is the ctl file that I am using:

    OPTIONS (SKIP=2)
    LOAD DATA
    CHARACTERSET WE8ISO8859P1
    INFILE '$FILE'
    APPEND
    INTO TABLE "XXDM_DM_17_ONACCOUNT_REC_SRC"
    WHEN (1) != 'FOOTER'
    FIELDS TERMINATED BY '|'
    OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (
      <column_name>,
      <column_name>,
      COMMENTS,
      <column_name>,
      <column_name>
    )

    Thanks in advance,

    Aditya

    Hello

    I suspect it's because of the default character length in sqlldr data types - CHAR(255) - which takes no notice of what the actual table definition is.

    Try adding CHAR(2000) to your control file so you end up with something like this:

    OPTIONS (SKIP=2)
    LOAD DATA
    CHARACTERSET WE8ISO8859P1
    INFILE '$FILE'
    APPEND
    INTO TABLE "XXDM_DM_17_ONACCOUNT_REC_SRC"
    WHEN (1) != 'FOOTER'
    FIELDS TERMINATED BY '|'
    OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (
      <column_name>,
      <column_name>,
      COMMENTS CHAR(2000),
      <column_name>,
      <column_name>
    )

    Cheers,

    Harry

  • ORA-01124: cannot recover data file 1 - file is in use or recovery

    I'm trying to recover the standby database, but it gives the error below.

    ORA-00283: recovery session canceled due to errors
    ORA-01124: cannot recover data file 1 - file is in use or recovery; recovery is already in progress
    ORA-01110: data file 1: 'I:\ORACLE\QAS\SAPDATA1\SYSTEM_1\SYSTEM.DATA1'

    When I checked the alert log, recovery had not started. Later I issued 'alter database recover cancel' and the command came back with the error below.

    "media recovery not started"

    It seems that the recovery is stuck in between.
    Please advise me how to kill the recovery session that is stuck, because I don't want to bounce the standby database.

    Thanks in advance.

    Is this Data Guard with MRP, or do you run a script?

    In a scripted standby, a RECOVER DATABASE session would have an UNTIL clause (UNTIL SEQUENCE most likely). On completing recovery to that point (SEQUENCE#), it would exit and shut down the database.

    In addition, the script should be written so that when a RECOVER session is active, another session is not allowed to start. It can loop in a wait state, or exit and retry at the next scheduled interval.

    Apparently your startup script is not robust enough to prevent another RECOVER session from starting even though the first is active (or it doesn't have a proper UNTIL clause and stop/exit actions).

    What you have is a custom implementation of a standby database. Without all the details of the script, the 'locking' between sessions (to prevent a second RECOVER starting when one is already running), etc., we can't really do much to help you.
    Your scripts should maintain status information. It should be possible for you to discover the 'other' sqlplus session which issued the RECOVER DATABASE but has not yet exited (e.g. how about a simple 'ps -ef | grep sql' and 'ps -ef | grep ora' combination?)

    Hemant K Collette

    Edited by: Hemant K Collette on May 29, 2013 17:47

  • Run an Idoc function in the data file returned by the GET_FILE service

    Hello

    I'm new to this forum, so thank you in advance for any help, and forgive me for any mistakes with the post.

    I'm trying to force the execution of a custom Idoc function in a UCM data file when this data file is requested from UCM through the GET_FILE service.

    The custom Idoc function is implemented as a filter of the computeFunction type. One of the data file elements calls my custom Idoc function:
    <wcm:element name="MainText">[!--$myIdocFunction()--]</wcm:element>

    The data file is then retrieved with CRMI via the GET_FILE service, but the Idoc function is not evaluated.

    I tried implementing another Idoc filter of type sendDataForServerResponse or sendDataForServerResponseBytes, which store the cached responseString and responseBytes objects, in order to look for any call to my Idoc function in the response object, run the Idoc function, and replace its output in the response. But this kind of filter never runs.

    The Idoc function myIdocFunction is executed correctly when I use the WCM_PLACEHOLDER service to get a RegionTemplate (.hcsp file) associated with the data file. In this case, the RegionTemplate refers to the "MainText" element of the data file with <!--$wcmElement("MainText")-->. But I need to make it work with the GET_FILE service as well.

    I am using UCM version 11.1.1.3.0.

    Any suggestion?
    Thank you very much
    Francesco

    Hello

    Thank you very much for your help, and sorry for this late reply.

    Your trick of activating full verbose tracing was helpful, because I found out I could use the prepareForFileResponse filter for my purpose, and I could also refer to the implementation of the native pdfwatermark filter, PdfwFileFilter.

    I managed to set up a filter whose purpose is to force Idoc evaluation of a predefined list of Idoc functions on the output returned by the GET_FILE service. Below I paste the code I have written, in case it may be useful to other people. In any case, be aware that this filter can cause performance problems, which must be considered carefully for your own use cases.

    First, define the filter in the filters result set of your component's .hda file:

    @ResultSet Filters
    4
    type
    location
    parameter
    loadOrder
    prepareForFileResponse
    mysamplecomponent.ForceIdocEvaluationFilter
    null
    1
    @end

    Here is a simplified version of the filter implementation:

    public class ForceIdocEvaluationFilter implements FilterImplementor {

        public int doFilter(Workspace ws, DataBinder binder, ExecutionContext ctx)
                throws DataException, ServiceException {
            String service = binder.getLocal("IdcService");
            String dDocName = binder.getLocal("dDocName");
            boolean isInternalCall = Boolean.parseBoolean(binder.getLocal("isInternalCall"));

            if ((ctx instanceof FileService) && service.equals("GET_FILE") && !isInternalCall) {
                FileService fileService = (FileService) ctx;
                checkToForceIdocEvaluation(dDocName, fileService);
            }

            // continue with other filters
            return CONTINUE;
        }

        private void checkToForceIdocEvaluation(String dDocName, FileService fileService)
                throws DataException, ServiceException {
            File primaryFile = IOUtils.getContentPrimaryFile(dDocName);
            String ext = FileUtils.getExtension(primaryFile.getPath());
            if (ext.equalsIgnoreCase("xml")) {
                forceIdocEvaluation(primaryFile, fileService);
            }
        }

        private void forceIdocEvaluation(File primaryFile, FileService fileService)
                throws ServiceException {
            String fileContent = IOUtils.readStringFromFile(primaryFile);
            ForceIdocEvaluationPatternReplacer replacer = new ForceIdocEvaluationPatternReplacer(fileService);
            String replacedContent = replacer.replace(fileContent);
            if (replacer.isMatchFound()) {
                setNewOutputOfService(fileService, replacedContent);
            }
        }

        private void setNewOutputOfService(FileService fileService, String newOutput)
                throws ServiceException {
            File newOutputFile = IOUtils.createTemporaryFile("xml");
            IOUtils.saveFile(newOutput, newOutputFile);
            fileService.setFile(newOutputFile.getPath());
        }
    }

    public class IOUtils {

        public static File getContentPrimaryFile(String dDocName)
                throws DataException, ServiceException {
            DataBinder serviceBinder = new DataBinder();
            serviceBinder.m_isExternalRequest = false;
            serviceBinder.putLocal("IdcService", "GET_FILE");
            serviceBinder.putLocal("dDocName", dDocName);
            serviceBinder.putLocal("RevisionSelectionMethod", "Latest");
            serviceBinder.putLocal("isInternalCall", "true");
            ServiceUtils.executeService(serviceBinder);
            String vaultFileName = DirectoryLocator.computeVaultFileName(serviceBinder);
            String vaultFilePath = DirectoryLocator.computeVaultPath(vaultFileName, serviceBinder);
            return new File(vaultFilePath);
        }

        public static String readStringFromFile(File sourceFile) throws ServiceException {
            try {
                return FileUtils.loadFile(sourceFile.getPath(), null, new String[] { "UTF-8" });
            } catch (IOException e) {
                throw new ServiceException(e);
            }
        }

        public static void saveFile(String source, File destination) throws ServiceException {
            FileUtils.writeFile(source, destination, "UTF-8", 0, "could not save file " + destination);
        }

        public static File getTemporaryFilesDir() throws ServiceException {
            String idcDir = SharedObjects.getEnvironmentValue("IntradocDir");
            String tmpDir = idcDir + "custom/MySampleComponent";
            FileUtils.checkOrCreateDirectory(tmpDir, 1);
            return new File(tmpDir);
        }

        public static File createTemporaryFile(String fileExtension) throws ServiceException {
            try {
                File tmpFile = File.createTempFile("tmp", "." + fileExtension, IOUtils.getTemporaryFilesDir());
                tmpFile.deleteOnExit();
                return tmpFile;
            } catch (IOException e) {
                throw new ServiceException(e);
            }
        }
    }

    public abstract class PatternReplacer {

        private boolean matchFound = false;

        public String replace(CharSequence sourceString) throws ServiceException {
            Matcher m = getPattern().matcher(sourceString);
            StringBuffer sb = new StringBuffer(sourceString.length());
            matchFound = false;
            while (m.find()) {
                matchFound = true;
                String matchedText = m.group(0);
                String replacement = doReplace(matchedText);
                m.appendReplacement(sb, Matcher.quoteReplacement(replacement));
            }
            m.appendTail(sb);
            return sb.toString();
        }

        protected abstract String doReplace(String textToReplace) throws ServiceException;

        public abstract Pattern getPattern() throws ServiceException;

        public boolean isMatchFound() {
            return matchFound;
        }
    }

    public class ForceIdocEvaluationPatternReplacer extends PatternReplacer {

        private ExecutionContext ctx;
        private Pattern idocPattern;

        public ForceIdocEvaluationPatternReplacer(ExecutionContext ctx) {
            this.ctx = ctx;
        }

        @Override
        public Pattern getPattern() throws ServiceException {
            if (idocPattern == null) {
                List functions = SharedObjects.getEnvValueAsList("forceidocevaluation.functionlist");
                idocPattern = IdocUtils.createIdocPattern(functions);
            }
            return idocPattern;
        }

        @Override
        protected String doReplace(String idocFunction) throws ServiceException {
            return IdocUtils.executeIdocFunction(ctx, idocFunction);
        }
    }

    public class IdocUtils {

        public static String executeIdocFunction(ExecutionContext ctx, String idocFunction)
                throws ServiceException {
            idocFunction = convertIdocStyle(idocFunction, IdocStyle.ANGULAR_BRACKETS);
            PageMerger activeMerger = (PageMerger) ctx.getCachedObject("PageMerger");
            try {
                String output = activeMerger.evaluateScript(idocFunction);
                return output;
            } catch (Exception e) {
                throw new ServiceException("could not execute Idoc function " + idocFunction, e);
            }
        }

        public enum IdocStyle {
            ANGULAR_BRACKETS,
            SQUARE_BRACKETS
        }

        public static String convertIdocStyle(String idocFunction, IdocStyle destinationStyle) {
            String result = null;
            switch (destinationStyle) {
                case ANGULAR_BRACKETS:
                    result = idocFunction.replace("[!--$", "<$").replace("--]", "$>");
                    break;
                case SQUARE_BRACKETS:
                    result = idocFunction.replace("<$", "[!--$").replace("$>", "--]");
                    break;
            }
            return result;
        }

        public static Pattern createIdocPattern(List idocFunctions) throws ServiceException {
            if (idocFunctions.isEmpty()) {
                throw new ServiceException("list of Idoc functions to create a pattern for is empty");
            }
            StringBuffer patternBuffer = new StringBuffer();
            // pattern prefix
            patternBuffer.append("(\\[\\!--\\$|<\\$)(");
            // OR-ed list of function names
            for (int i = 0; i < idocFunctions.size(); i++) {
                patternBuffer.append(idocFunctions.get(i));
                if (i < idocFunctions.size() - 1) {
                    patternBuffer.append("|");
                }
            }
            // pattern suffix
            patternBuffer.append(")(.+?)(--\\]|\\$>)");
            String pattern = patternBuffer.toString();
            // log.trace("Idoc functions pattern: " + pattern);
            return Pattern.compile(pattern);
        }
    }

    public class ServiceUtils {

        private static Workspace getSystemWorkspace() {
            Workspace workspace = null;
            Provider wsProvider = Providers.getProvider("SystemDatabase");
            if (null != wsProvider) {
                workspace = (Workspace) wsProvider.getProvider();
            }
            return workspace;
        }

        private static UserData getFullUserData(String userName, ExecutionContext cxt, Workspace ws)
                throws DataException, ServiceException {
            if (null == ws) {
                ws = getSystemWorkspace();
            }
            UserData userData = UserStorage.retrieveUserDatabaseProfileDataFull(userName, ws, null, cxt, true, true);
            ws.releaseConnection();
            return userData;
        }

        public static void executeService(DataBinder binder) throws DataException, ServiceException {
            // get a connection to the database
            Workspace workspace = getSystemWorkspace();

            // look for a value for IdcService
            String cmd = binder.getLocal("IdcService");
            if (null == cmd) {
                throw new DataException("!csIdcServiceMissing");
            }

            // get the service definition
            ServiceData serviceData = ServiceManager.getFullService(cmd);
            if (null == serviceData) {
                throw new DataException(LocaleUtils.encodeMessage("!csNoServiceDefined", null, cmd));
            }

            // create the service object for this service
            Service service = ServiceManager.createService(serviceData.m_classID, workspace, null, binder, serviceData);

            String userName = "sysadmin";
            UserData fullUserData = getFullUserData(userName, service, workspace);
            service.setUserData(fullUserData);
            binder.m_environment.put("REMOTE_USER", userName);

            try {
                // init the service to not return HTML
                service.setSendFlags(true, true);
                // create the ServiceHandlers and implementors
                service.initDelegatedObjects();
                // do a security check
                service.globalSecurityCheck();
                // prepare for the service
                service.preActions();
                // execute the service
                service.doActions();
            } catch (ServiceException e) {
                // swallowed, as in the original code
            } finally {
                service.cleanUp(true);
                if (null != workspace) {
                    workspace.releaseConnection();
                }
            }
        }
    }

  • sqlldr question: field in the data file exceeds the maximum length

    Hello friends,

    I am struggling with a simple data load using sqlldr and hoping someone can guide me.

    Ref: I use Oracle 11.2 on Linux 5.7.
    ===========================
    Here is my table:
    SQL> desc ntwkrep.CARD
     Name                                                              Null?    Type
     ----------------------------------------------------------------- -------- ------------------
     CIM_DESCRIPTION                                                            VARCHAR2(255)
     CIM_NAME                                                          NOT NULL VARCHAR2(255)
     COMPOSEDOF                                                                 VARCHAR2(4000)
     DESCRIPTION                                                                VARCHAR2(4000)
     DISPLAYNAME                                                       NOT NULL VARCHAR2(255)
     LOCATION                                                                   VARCHAR2(4000)
     PARTOF                                                                     VARCHAR2(255)
     *REALIZES                                                                   VARCHAR2(4000)*
     SERIALNUMBER                                                               VARCHAR2(255)
     SYSTEMNAME                                                        NOT NULL VARCHAR2(255)
     TYPE                                                                       VARCHAR2(255)
     STATUS                                                                     VARCHAR2(255)
     LASTMODIFIED                                                               DATE
    When I try to load a text data file using sqlldr, I get the following errors on some records, which then fail to load.

    Example:
    =======
    Record 1: Rejected - Error on table NTWKREP.CARD, column REALIZES.
    Field in data file exceeds maximum length

    Looking at the actual data and counting the characters for the REALIZES column data, I see that it is basically a little over 1000 characters.

    So, trying various ideas to solve the problem, I tried changing nls_length_semantics to CHAR and re-creating the table, but this did not help and I still got the same data load errors on the same lines.


    Then, I changed nls_length_semantics back to BYTE and recreated the table again.
    This time, I altered the table manually:
    SQL> ALTER TABLE ntwkrep.CARD MODIFY (REALIZES VARCHAR2(4000 char));
    
    Table altered.
    
    SQL> desc ntwkrep.card
     Name                                                              Null?    Type
     ----------------------------------------------------------------- -------- --------------------------------------------
     CIM_DESCRIPTION                                                            VARCHAR2(255)
     CIM_NAME                                                          NOT NULL VARCHAR2(255)
     COMPOSEDOF                                                                 VARCHAR2(4000)
     DESCRIPTION                                                                VARCHAR2(4000)
     DISPLAYNAME                                                       NOT NULL VARCHAR2(255)
     LOCATION                                                                   VARCHAR2(4000)
     PARTOF                                                                     VARCHAR2(255)
     REALIZES                                                                   VARCHAR2(4000 CHAR)
     SERIALNUMBER                                                               VARCHAR2(255)
     SYSTEMNAME                                                        NOT NULL VARCHAR2(255)
     TYPE                                                                       VARCHAR2(255)
     STATUS                                                                     VARCHAR2(255)
     LASTMODIFIED                                                               DATE
    Once again, the data load failed with the same error on the same lines.

    So, this time, I thought I would try changing the column's data type to a CLOB, and again, the same lines still fail to load.
    SQL> desc ntwkrep.CARD
     Name                                                              Null?    Type
     ----------------------------------------------------------------- -------- -----------------------
     CIM_DESCRIPTION                                                            VARCHAR2(255)
     CIM_NAME                                                          NOT NULL VARCHAR2(255)
     COMPOSEDOF                                                                 VARCHAR2(4000)
     DESCRIPTION                                                                VARCHAR2(4000)
     DISPLAYNAME                                                       NOT NULL VARCHAR2(255)
     LOCATION                                                                   VARCHAR2(4000)
     PARTOF                                                                     VARCHAR2(255)
     REALIZES                                                                   CLOB
     SERIALNUMBER                                                               VARCHAR2(255)
     SYSTEMNAME                                                        NOT NULL VARCHAR2(255)
     TYPE                                                                       VARCHAR2(255)
     STATUS                                                                     VARCHAR2(255)
     LASTMODIFIED                                                               DATE
    Any ideas?

    Here's a copy of the first line of data that fails to load every time, no matter how I change the REALIZES column in the table.
    other(1)`CARD-mes-fhnb-bldg-137/1`  `other(1)`CARD-mes-fhnb-bldg-137/1 [other(1)]`HwVersion:C0|SwVersion:12.2(40)SE|Serial#:FOC1302U2S6|` Chassis::CHASSIS-mes-fhnb-bldg-137, Switch::mes-fhnb-bldg-137 ` Port::PORT-mes-fhnb-bldg-137/1.23, Port::PORT-mes-fhnb-bldg-137/1.21, Port::PORT-mes-fhnb-bldg-137/1.5, Port::PORT-mes-fhnb-bldg-137/1.7, Port::PORT-mes-fhnb-bldg-137/1.14, Port::PORT-mes-fhnb-bldg-137/1.12, Port::PORT-mes-fhnb-bldg-137/1.6, Port::PORT-mes-fhnb-bldg-137/1.4, Port::PORT-mes-fhnb-bldg-137/1.20, Port::PORT-mes-fhnb-bldg-137/1.22, Port::PORT-mes-fhnb-bldg-137/1.15, Port::PORT-mes-fhnb-bldg-137/1.13, Port::PORT-mes-fhnb-bldg-137/1.18, Port::PORT-mes-fhnb-bldg-137/1.24, Port::PORT-mes-fhnb-bldg-137/1.26, Port::PORT-mes-fhnb-bldg-137/1.17, Port::PORT-mes-fhnb-bldg-137/1.11, Port::PORT-mes-fhnb-bldg-137/1.2, Port::PORT-mes-fhnb-bldg-137/1.8, Port::PORT-mes-fhnb-bldg-137/1.10, Port::PORT-mes-fhnb-bldg-137/1.16, Port::PORT-mes-fhnb-bldg-137/1.9, Port::PORT-mes-fhnb-bldg-137/1.3, Port::PORT-mes-fhnb-bldg-137/1.1, Port::PORT-mes-fhnb-bldg-137/1.19, Port::PORT-mes-fhnb-bldg-137/1.25 `Serial#:FOC1302U2S6`mes-fhnb-bldg-137`other(1)
    Finally, for reference, here's the controlfile I use.
    load data
    infile '/opt/EMC/data/out/Card.txt'
    badfile '/dbadmin/data_loads/logs/Card.bad'
    append
    into table ntwkrep.CARD
    fields terminated by "`"
    TRAILING NULLCOLS
    (
    CIM_DESCRIPTION,
    CIM_NAME,
    COMPOSEDOF,
    DESCRIPTION,
    DISPLAYNAME,
    LOCATION,
    PARTOF,
    REALIZES,
    SERIALNUMBER,
    SYSTEMNAME,
    TYPE,
    STATUS,
    LASTMODIFIED "sysdate"
    )

    The default datatype in sqlldr is CHAR(255).

    Modify your control file as follows, which I think should work with REALIZES as VARCHAR2(4000):

    COMPOSEDOF char(4000),
    DESCRIPTION char(4000),
    LOCATION char(4000),
    REALIZES char(4000),
    
  • Move the data files on the standby server without moving them on the primary, Oracle 11gR2

    Hi all


    Oracle 11gR2 with EBS version 11.5.0.2.

    One of the mount points on the standby is full, and I want to move one of the data files to another mount point without making any changes on the primary.

    I tried Google and came across this link:
    http://oraclesea.blogspot.in/2011/12/move-datafiles-on-standby-server.html
    But it did not work... I had to start the database with the spfile to get it working.

    The steps mentioned in the blog:
    Include the parameter below in the standby parameter file:
    DB_FILE_NAME_CONVERT = '/primary_location/xyz.dbf','/standby_location/xyz.dbf'
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE cancel;
    shut immediate
    startup nomount pfile=initSCSL.ora
    alter database mount standby database ;
    alter system set standby_file_management='MANUAL' SCOPE=MEMORY ; 
    ! cp /primary_location/xyz.dbf'  /standby_location/xyz.dbf
    alter database rename file  '/primary_location/xyz.dbf' to '/standby_location/xyz.dbf';
    alter system set standby_file_management='AUTO' SCOPE=MEMORY ; 
    alter database recover managed standby database parallel 4 disconnect from session;
    Can you please help me with the right steps...


    Concerning
    KK

    Edited by: 903150 Sep 26, 2012 22:41

    Here is an example for you.

    Standby database:

    SQL> select status,instance_name,database_role from v$database,v$instance;
    
    STATUS       INSTANCE_NAME    DATABASE_ROLE
    ------------ ---------------- ----------------
    OPEN         srprim           PHYSICAL STANDBY
    
    SQL> select file_name from dba_data_files;
    
    FILE_NAME
    --------------------------------------------------------------------------------
    C:\APP\SHIVANANDA.RAO\ORADATA\SRPRIM\USERS01.DBF
    C:\APP\SHIVANANDA.RAO\ORADATA\SRPRIM\UNDOTBS01.DBF
    C:\APP\SHIVANANDA.RAO\ORADATA\SRPRIM\SYSAUX01.DBF
    C:\APP\SHIVANANDA.RAO\ORADATA\SRPRIM\SYSTEM01.DBF
    C:\APP\SHIVANANDA.RAO\ORADATA\SRPRIM\USERS02.DBF
    
    SQL> select process,status,sequence# from v$managed_standby;
    
    PROCESS   STATUS        SEQUENCE#
    --------- ------------ ----------
    ARCH      CONNECTED             0
    ARCH      CONNECTED             0
    ARCH      CONNECTED             0
    ARCH      CONNECTED             0
    RFS       IDLE                  0
    RFS       IDLE                154
    MRP0      WAIT_FOR_LOG        154
    
    7 rows selected.
    
    SQL> show parameter name_convert
    
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    db_file_name_convert                 string
    log_file_name_convert                string
    SQL> alter database recover managed standby database cancel;
    
    Database altered.
    
    SQL> shut immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    
    M:\>copy C:\APP\SHIVANANDA.RAO\ORADATA\SRPRIM\USERS02.DBF C:\APP\SHIVANANDA.RAO\ORADATA\DBTEST\USERS02.DBF
    1 file(s) copied.
    
    M:\>sqlplus sys/oracle@srprim as sysdba
    
    SQL*Plus: Release 11.2.0.1.0 Production on Thu Sep 27 14:57:16 2012
    
    Copyright (c) 1982, 2010, Oracle.  All rights reserved.
    
    Connected to an idle instance.
    
    SQL> startup mount
    ORACLE instance started.
    
    Total System Global Area  778387456 bytes
    Fixed Size                  1374808 bytes
    Variable Size             494929320 bytes
    Database Buffers          276824064 bytes
    Redo Buffers                5259264 bytes
    Database mounted.
    SQL> alter system set standby_file_management=manual;
    
    System altered.
    
    SQL> alter database rename file 'C:\APP\SHIVANANDA.RAO\ORADATA\SRPRIM\USERS02.DBF' to 'C:\APP\SHIVANANDA.RAO\ORADATA\DBTEST\USERS02.DBF';
    
    Database altered.
    
    SQL> alter database recover managed standby database disconnect from session;
    
    Database altered.
    
    SQL> select process,status,sequence# from v$managed_standby;
    
    PROCESS   STATUS        SEQUENCE#
    --------- ------------ ----------
    ARCH      CONNECTED             0
    ARCH      CONNECTED             0
    ARCH      CONNECTED             0
    ARCH      CONNECTED             0
    RFS       IDLE                  0
    RFS       IDLE                155
    MRP0      WAIT_FOR_LOG        155
    
    7 rows selected.
    
    SQL> select name from v$datafile;
    
    NAME
    --------------------------------------------------
    C:\APP\SHIVANANDA.RAO\ORADATA\SRPRIM\SYSTEM01.DBF
    C:\APP\SHIVANANDA.RAO\ORADATA\SRPRIM\SYSAUX01.DBF
    C:\APP\SHIVANANDA.RAO\ORADATA\SRPRIM\UNDOTBS01.DBF
    C:\APP\SHIVANANDA.RAO\ORADATA\SRPRIM\USERS01.DBF
    C:\APP\SHIVANANDA.RAO\ORADATA\DBTEST\USERS02.DBF
    

    1. You must shut down the standby database.
    2. Copy the file that you want to move to the other mount point using OS commands.
    3. Mount the standby database.
    4. Rename the data file at the database level. Make sure standby_file_management is set to MANUAL.
    5. Start the MRP on the standby database.

    Please do not use more than one ID to respond in a thread. Since you created this thread with ID 903150, I suggest you reply with that same ID, not another one.

  • The high water mark in the data file

    What is the concept of the high water mark in the data files? What is its use?

    >
    What is the concept of the high water mark in the data files? What is its use?
    >
    The high water mark indicates the end of the used space in a segment, meaning no space beyond the mark has been used.

    From the Database Concepts doc
    http://docs.Oracle.com/CD/B28359_01/server.111/b28318/logical.htm
    >
    The high water mark is the boundary between used and unused space in a segment.
    >

    Two main uses are direct-path loads and parallel loads. Each of these adds data above the high water mark of a segment and then moves the mark to the end of the data that was added. These loads are faster because the freelists, extents, and blocks being loaded do not have to be checked for existing data.

    From the DBA Guide
    http://docs.Oracle.com/CD/E11882_01/server.112/e17120/tables004.htm
    >
    All serial direct-path INSERT operations, as well as parallel direct-path INSERT into partitioned tables, insert data above the high water mark of the affected segment.
    >
    Each of these documents has additional information on the processes used.
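
    As a quick illustration (a sketch, with hypothetical tables t_target and t_source), a serial direct-path insert loads above the high water mark:

    -- The APPEND hint requests a direct-path insert: rows are written
    -- above the HWM and free space below it is not reused.
    insert /*+ append */ into t_target
    select * from t_source;
    commit;  -- required before the same session can query t_target again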
