On the use of the data file

Hi all

Last week I was simulating a database crash (Oracle 11.2.0.3, latest patch set) caused by a missing data file. I simply removed the data file where my web application stores its data, to see what would happen. But nothing happened: the application stayed up and running, and I was able to create new items in the user interface and save them. Normally, all of this data is stored in Oracle.

After creating a few more items, I bounced the database. Then I restored and recovered the data file with RMAN. All the items I had created after deleting the data file were there.

I would like to know in which order Oracle writes committed DML to the redo logs and to the data files, so that I can explain the behavior of the application. I was wondering how the changes committed after the deletion of the data file were preserved even after restoring the data file from the last RMAN backup.

Thanks for any input.

Best regards

Coby.

Every time you commit a transaction, LGWR writes the contents of the redo log buffer to the online redo log files. When an online redo log file fills up, a log switch occurs, a checkpoint is triggered, and the filled log is written out to an archived log file.

DBWR writes dirty buffers from the buffer cache to the data files only when it needs to, for example when it runs out of free buffers or at a checkpoint.

In your case, what could have happened is that when you deleted the data file and then made changes in the database, Oracle could not write those changes to the data file, but it still wrote the entries to the redo log files. (On Linux/Unix the deleted file also stays usable through the instance's open file handle until the instance is shut down, which is why the application kept working.)

So, after you bounced the database, restored the data file, and recovered it using the archived logs, you got output like "media recovery complete", and all the committed changes made after you deleted the data file were reapplied from the redo during recovery.
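As a minimal sketch of this kind of recovery (the datafile number 4 is illustrative, not from the original post):

    RMAN> SQL 'alter database datafile 4 offline';   -- or start the instance in MOUNT
    RMAN> RESTORE DATAFILE 4;                        -- copy the file back from the last backup
    RMAN> RECOVER DATAFILE 4;                        -- reapply committed changes from archived and online redo
    RMAN> SQL 'alter database datafile 4 online';

The RECOVER step is what brings back the post-deletion transactions: they live in the redo stream, not in the restored file.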

Tags: Database

Similar Questions

  • Can Easy Transfer be used to reload the data files after formatting the hard drive?

    I have been informed by Microsoft level-2 support that I have to reformat my hard drive to fix the problem with Windows Backup (snapshots won't work).  Will Easy Transfer copy ALL the data files, including OUTLOOK 2007, to an external hard drive, so that I can reinstall them on the reformatted drive?

    Easy Transfer copies some things but not everything. It does more than a simple copy/paste can (settings, for example), but it does not transfer programs (those need to be reinstalled).  Here is a description of what it does (and does not do): http://support.microsoft.com/kb/928635.  It will transfer Outlook 2007 PST files and other personal data if they are selected - you need to know which files you need and where to select them - but it won't transfer the program itself (which must be reinstalled).  It must be run before you reformat the hard disk, to collect the data; then you can transfer back everything you saved once the system is reinstalled.  You would do the reinstall prior to the transfer.

    I hope this helps.

    Good luck!

    Lorien - MCSA/MCSE/Network+/A+ - if this post solves your problem, please click the 'Mark as Answer' or 'Helpful' button at the top of this message. By marking a post as Answered or Helpful, you help others find the answer more quickly.

  • ORA-01124: cannot recover data file 1 - file is in use or recovery

    I'm trying to recover the standby database, but it gives the error below.

    ORA-00283: recovery session canceled due to errors
    ORA-01124: cannot recover data file 1 - file is in use or recovery
    ORA-01110: data file 1: 'I:\ORACLE\QAS\SAPDATA1\SYSTEM_1\SYSTEM.DATA1'

    When I checked the alert log, recovery had not started. Later I issued 'alter database recover cancel', and the command came back with:

    "media recovery not started"

    It seems that the recovery is stuck in between.
    Please advise me how to kill the stuck recovery session, because I don't want to bounce the standby database.

    Thanks in advance.

    This is not Data Guard with MRP; you are running a script, I presume.

    In a scripted standby, a RECOVER DATABASE session would have an UNTIL clause (most likely UNTIL SEQUENCE). On reaching that point (SEQUENCE#), it ends the recovery, exits, and leaves the database as it is.

    In addition, such a script is written so that when a RECOVER session is active, another session is not allowed to start. It can either loop in a waiting state or exit and try again at the next scheduled interval.

    Apparently your script is not robust enough to prevent another RECOVER session from starting even though the first is active (or it doesn't have a proper UNTIL clause and stop/exit/cleanup actions).

    What you have is a custom implementation of a standby database. Without all the details of the script, the 'locking' between sessions (to prevent a second RECOVER from starting when one is already running), etc., we can't really do much to help you.
    Your scripts must maintain status information. It should be possible for you to find the 'other' sqlplus session that issued the RECOVER DATABASE but has not yet exited (e.g. how about a simple combination of "ps -ef | grep sql" and "ps -ef | grep ora"?)

    Hemant K Chitale

    Edited by: Hemant K Chitale on May 29, 2013 17:47

  • Setting a new path for the data files when restoring, using SET NEWNAME FOR DATABASE

    Version: 11.2.0.3 Linux

    Today I had to do an RMAN restore to a new server, and I came across the following OTN post on SET NEWNAME FOR DATABASE:

    ALTER DATABASE OPEN RESETLOGS UPGRADE;         -- was throwing an error

    So I thought of using it to specify the new location for the data files being restored.

    That's what I did
    ===================

    I restored the control file and cataloged the backup pieces using the CATALOG START WITH command. Then I started the restore:
    $ rman target / cmdfile=restore.txt
    
    Recovery Manager: Release 11.2.0.3.0 - Production on Thu Jul 26 04:40:41 2012
    
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database: SPIKEY (DBID=2576163333, not open)
    
    RMAN> run
    2>  {
    3>  SET NEWNAME FOR DATABASE TO '/fnup/hwrc/oradata/spikey';
    4>  restore database  ;
    5>  }
    6>
    7>
    8>
    executing command: SET NEWNAME
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of set command at 07/26/2012 04:40:43
    RMAN-06970: NEWNAME '/fnup/hwrc/oradata/spikey' for database must include %f or %U format
    
    Recovery Manager complete.
    I don't know how it worked for Levi without %f or %U. So I added %f:
     $ vi restore.txt
     $ cat restore.txt
    run
     {
     SET NEWNAME FOR DATABASE TO '/fnup/hwrc/oradata/spikey/%f';
     restore database  ;
     }
    
    
     $ rman target / cmdfile=restore.txt
    
    Recovery Manager: Release 11.2.0.3.0 - Production on Thu Jul 26 04:45:45 2012
    
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database: SPIKEY (DBID=2576163333, not open)
    
    RMAN> run
    2>  {
    3>  SET NEWNAME FOR DATABASE TO '/fnup/hwrc/oradata/spikey/%f';
    4>  restore database  ;
    5>  }
    6>
    7>
    8>
    executing command: SET NEWNAME
    
    Starting restore at 26-JUL-12
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=19 device type=DISK
    
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00001 to /fnup/hwrc/oradata/spikey/1
    channel ORA_DISK_1: restoring datafile 00002 to /fnup/hwrc/oradata/spikey/2
    channel ORA_DISK_1: restoring datafile 00003 to /fnup/hwrc/oradata/spikey/3
    channel ORA_DISK_1: restoring datafile 00004 to /fnup/hwrc/oradata/spikey/4
    channel ORA_DISK_1: restoring datafile 00005 to /fnup/hwrc/oradata/spikey/5
    channel ORA_DISK_1: restoring datafile 00006 to /fnup/hwrc/oradata/spikey/6
    channel ORA_DISK_1: restoring datafile 00007 to /fnup/hwrc/oradata/spikey/7
    channel ORA_DISK_1: restoring datafile 00008 to /fnup/hwrc/oradata/spikey/8
    channel ORA_DISK_1: restoring datafile 00009 to /fnup/hwrc/oradata/spikey/9
    channel ORA_DISK_1: reading from backup piece /u07/bkpfolder/SPIKEY_full_01nh0028_1_1_20120725.rmbk
    channel ORA_DISK_1: errors found reading piece handle=/u07/bkpfolder/SPIKEY_full_01nh0028_1_1_20120725.rmbk
    channel ORA_DISK_1: failover to piece handle=/u07/dump/bkpfolder/SPIKEY_full_01nh0028_1_1_20120725.rmbk tag=SPIKEY_FULL
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:01:56
    Finished restore at 26-JUL-12
    
    Recovery Manager complete.
    As you can see, RMAN restored the data files to the desired location. But the data file names ended up as:
    1
    2
    3
    .
    .      
    .
    9
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! -----------| Holy Cow |-----------------------------


    So I had to rename each file, as below:
    $ mv 1 /fnup/hwrc/oradata/spikey/system01.dbf
    $ mv 2 /fnup/hwrc/oradata/spikey/sysaux01.dbf
    $ mv 3 /fnup/hwrc/oradata/spikey/undotbs01.dbf
    I would have been better off running the command below for each data file:
    SET NEWNAME FOR DATAFILE
    Now I think there is no advantage in using SET NEWNAME FOR DATABASE, only disadvantages. Did I do anything wrong above?

    Martin;

    On the issue of SET NEWNAME FOR DATABASE: you must specify at least one of the first three of the following substitution variables to avoid name collisions: %b, %f, %U. See the semantics entry for TO 'filename' for a description of the possible substitution variables.

    You used %f.

    %b
    
    Specifies the filename without the fully qualified directory path. For example, the datafile name /oradata/prod/financial.dbf is transformed to financial.dbf. This variable enables you to preserve the names of the datafiles while you move them to a different directory. During backup, it can be used for the creation of image copies. The variable cannot be used for OMF datafiles or backup sets.
    
    %f
    
    Specifies the absolute file number of the datafile for which the new name is generated. For example, if datafile 2 is duplicated, then %f generates the value 2.
    
    %U
    
    Specifies a system-generated unique filename. The name is in the following format: data-D-%d_id-%I_TS-%N_FNO-%f. The %d variable specifies the database name. For example, a possible name might be data-D-prod_id-22398754_TS-users_FNO-7.
    

    Source - E10643-01

    Backup and recovery reference

    http://docs.Oracle.com/CD/E14072_01/backup.112/e10643/rcmsynta2014.htm
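    For example, a sketch using %b instead, which would have preserved the original file names (same illustrative path as above; note %b cannot be used for OMF files):

    run
    {
      SET NEWNAME FOR DATABASE TO '/fnup/hwrc/oradata/spikey/%b';
      restore database;
      switch datafile all;    -- points the control file at the restored copies
    }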

    I see CKPT's reply and I agree with it!

    Best regards

    mseberg

  • File ID of a data file created using CREATE DATAFILE

    Version: 10gR2

    I lost a test data file that was created this morning. No backup is available, but I have all the archived logs.

    Question1.
    Here are the steps to recreate it. Right?
    1. alter tablespace <tablespace_name> offline immediate;  --- should this be offline drop?
    2. alter database create datafile '/u03/oradata/dbtst/CLM01.dbf';
    3. recover tablespace <tablespace_name>;
    4. alter tablespace <tablespace_name> online;
    Question2.
    Will the newly created file retain the file ID of the original file that was lost?

    Since you're on 10g, you don't have to manually create the data file. See "Recovering a Lost Datafile Without a Backup: Example" for more details.

    The file identifier will remain the same.
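    A minimal RMAN sketch of that 10g approach (the tablespace name clm_data is illustrative; RMAN re-creates the never-backed-up file from control file information, then applies all archived redo since its creation):

    RMAN> SQL 'alter tablespace clm_data offline immediate';
    RMAN> RESTORE TABLESPACE clm_data;   -- re-creates the lost file automatically
    RMAN> RECOVER TABLESPACE clm_data;
    RMAN> SQL 'alter tablespace clm_data online';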

  • Generating an analog waveform based on a data file

    I want to create an analog voltage output that follows a data file I have (Excel, CSV, or text, which is easy).  The data file defines a waveform with equal time between steps (dt = 0.0034 sec).  After outputting all the data points, I want it to repeat indefinitely.

    What is the best way to create the waveform of a data file?

    To create a waveform data type, calculate dt by subtracting two values in column 1, and get the Y array from column 2. If you save the file as a comma-separated or tab-delimited text file, you can then use Read From Spreadsheet File. After obtaining a 2D array, you would use the Index Array and Array Subset functions.

    Assuming you use an NI data acquisition card for the output signal, you can pass a waveform data type to a DAQmx Write and configure the task for continuous generation.

  • Opening an Outlook data file - access denied

    I have an Outlook data file created and used when I was running Outlook 2003 on Windows XP Pro. I have a new computer with Windows 7 Home Premium, and I'm still on Outlook 2003. When I try to open the data file, I get: file access is denied. You don't have permission to access the file C:\Users\Valued Customer\Outlook_xxx.pst

    I am logged in as administrator.
    The permissions for all folders in the path and the file itself grant full control.

    I changed the owner to my new login name.

    Wrong NG - try: http://social.answers.microsoft.com/Forums/en-US/category/officeoutlook
    TaurArian [MVP] 2005-2010 - Update Services

  • How can I re-associate the data files with their programs?

    Original title:

    re-associating the files

    The OS had to be reinstalled.  All data was saved, but the programs were lost.  The programs are now reinstalled, but how can I re-associate the data files with their programs?

    Hello

    Copy the data back to your computer (Documents, Photos, etc.), and the file extensions should automatically be associated with the programs that created them.

    Otherwise:

    "Changing programs by default by using Set Program Access and defaults of the computer"

    http://Windows.Microsoft.com/en-us/Windows/set-program-access-computer-defaults#1TC=Windows-7

    "How to change file Associations in Windows 7 and Windows 8.

    http://www.7tutorials.com/how-associate-file-type-or-protocol-program

    Cheers.

  • ORA-19846: cannot read header of datafile 21 of the remote site

    Hello

    I have a situation, or I can say a scenario. It is purely for testing. The database is 12.1.0.1 on a Linux box using ASM (OMF).

    The standby is created on another machine on the same platform, which also uses ASM (OMF), and it is in sync with the primary. Now, suppose I create a PDB on the primary from the SEED, and it is created successfully.

    After that, a couple of log switches pass the redo to the standby, but MRP fails because of naming conventions. Fine with that! Now, on the primary, I drop the newly created PDB. Again a couple of log switches, which are passed on to the standby. Of course, the standby is still out of sync.

    Now, how do I get my standby back in sync with the primary? I can't use the roll-forward method, since the required data file (the new PDB's) no longer exists on the primary site either. I get the following error:

    RMAN> recover database from service prim noredo using compressed backupset;

    Starting recover at 08-NOV-15

    using target database control file instead of recovery catalog

    allocated channel: ORA_DISK_1

    channel ORA_DISK_1: SID=70 instance=stby device type=DISK

    RMAN-00571: ===========================================================

    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

    RMAN-00571: ===========================================================

    RMAN-03002: failure of recover command at 11/08/2015 18:55:32

    ORA-19846: cannot read header of datafile 21 of the remote site

    Any clues on how I can go ahead? Of course, recreating the standby is an option, as this is only a test setup, but I don't want to recreate it.

    Thank you.

    I tried the following:

    1. Took an incremental backup of the primary from the SCN where the standby is at; also took a backup of the primary controlfile in standby format.

    2. Copied the backup pieces to the standby and cataloged them there.

    3. Recovered the standby with the NOREDO option - it fails here with the same error, pointing to data file 21.

    OK, understood. Try not to recover the standby first; rather, restore the controlfile and then perform the recovery.

    Do it like this:

    1. Take an incremental backup of the primary from the SCN, and also take a controlfile backup in standby format.

    2. Copy them to the standby, and get the data file locations (names) by querying v$datafile on the standby. Restore the standby controlfile from the backup controlfile you took on the primary, and mount.

    3. Since you are using OMF, the data file paths on the primary and the standby will be different. So you need to catalog the data files of the standby database.

    (Reason: you restored the controlfile from the primary in step 2, which carries the primary's paths.) Use the details you obtained in step 2 and catalog them.

    4. Switch the database to copy with RMAN. (RMAN> switch database to copy;)

    5. Catalog the backup pieces that you copied in step 2.

    6. Recover the standby database using the NOREDO option.

    7. Finally, start MRP. This should solve your problem. A rough RMAN sketch of the sequence follows below.

    The reason I say this works is that here you restore the controlfile from the primary first, which will not have the details of datafile 21, and then you recover. So it should succeed.

    In the previous method, you tried to recover the standby first and then restore the controlfile. While recovering, the standby still looks for datafile 21, as its controlfile is not yet updated.
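    As promised, a rough sketch of the sequence in RMAN (the SCN, paths, and disk group names are illustrative assumptions, not from the original thread):

    -- On the primary:
    RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/tmp/fwd_%U';
    RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/tmp/stby_ctl.bkp';

    -- On the standby, after copying the pieces across:
    RMAN> SHUTDOWN IMMEDIATE;
    RMAN> STARTUP NOMOUNT;
    RMAN> RESTORE STANDBY CONTROLFILE FROM '/tmp/stby_ctl.bkp';
    RMAN> ALTER DATABASE MOUNT;
    RMAN> CATALOG START WITH '+DATA/STBY/DATAFILE/';  -- the standby's own OMF data files
    RMAN> SWITCH DATABASE TO COPY;
    RMAN> CATALOG START WITH '/tmp/fwd_';             -- the copied incremental pieces
    RMAN> RECOVER DATABASE NOREDO;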

    HTH

    -Jonathan Rolland

  • RMAN issues - no backup or copy of the data file found

    Oracle 11gR2

    Linux RHEL 6.5

    I inherited a database backup and restore problem, since the DBA is out of office.

    Here is the script used for the backup:

    CONFIGURE DEFAULT DEVICE TYPE TO DISK;
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u01/app/oracle/bkp/controlfile/%F.ctl';
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
    SHOW ALL;
    run {
      shutdown immediate;
      startup mount;
      allocate channel dup1 device type disk;
      allocate channel dup2 device type disk;
      sql "create pfile=''/u01/app/oracle/bkp/pfile/initpfile.ora'' from spfile";
      backup format '/u01/app/oracle/bkp/cold_db/cold_bkp_%U' database;
      release channel dup1;
      release channel dup2;
      alter database open;
    }

    When I try the following restore script:

    run
    {
      startup nomount pfile='/u01/app/oracle/bkp/pfile/initpfile.ora';
      restore controlfile from '/u01/app/oracle/bkp/controlfile/c-123131414-20140509-00.ctl';
      alter database mount;
      restore database;
      alter database open resetlogs;
    }

    I get the error RMAN-06023: no backup or copy of the datafile found

    I'm trying to restore a database backup from 5 days ago, and I'm using the corresponding backup control file.

    I'll close this discussion and continue working with Oracle Support.  Thank you all for your help.

  • Cannot alter the data file

    Unable to Alter Database Datafile

    Hi all

    Using: ALTER DATABASE DATAFILE 'C:\Oracle\APP\ORADATA\...\USERS01.DBF' RESIZE 100M;

    getting this error.

    SQL> ALTER DATABASE DATAFILE 'C:\Oracle\APP\ORADATA\...\USERS01.DBF' RESIZE 100M;
    Error report:
    SQL Error: ORA-01237: cannot extend datafile 4
    ORA-01110: data file 4: 'C:\Oracle\APP\ORADATA\...\USERS01.DBF'
    ORA-27059: could not reduce file size
    OSD-04005: SetFilePointer() failure, unable to read from file
    O/S-Error: (OS 112) There is not enough space on the disk.
    01237. 00000 - "cannot extend %s datafile"
    *Cause:    An operating system error occurred during the resize.
    *Action:   Fix the cause of the operating system error and retry the command.

    I understand that the OS disk is full, and I guess this isn't an Oracle-specific error but an OS-level one. Could someone suggest how to deal with freeing up space? Is it possible to shrink and reuse the data files? Please advise.

    Please help me out.

    Thank you and best regards,
    Cabbage

    You need to create more space, either by deleting unnecessary files or by shrinking data files.
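    For instance, a hedged sketch of shrinking a data file (the file number and target size are illustrative; the resize fails with ORA-03297 if data still sits beyond the new end of the file):

    -- See how big the file currently is:
    SELECT file_id, file_name, bytes/1024/1024 AS size_mb
    FROM   dba_data_files
    WHERE  file_id = 4;

    -- Then try to give space back to the OS:
    ALTER DATABASE DATAFILE 'C:\Oracle\APP\ORADATA\...\USERS01.DBF' RESIZE 50M;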

  • "Field in data file exceeds maximum length" - CTL file error

    Hello

    I am loading data into a new system using a CTL file, but I get the error "Field in data file exceeds maximum length" for a few records; the other records are processed successfully. I checked the length of a failing record in the extract file, and it is less than the length of the target column, VARCHAR2(2000 BYTE). Here is an example of the failing data:


    Hi Rebecca ~ I just talk to our Finance Department and they agreed that ABC payments can be allocated to the outstanding invoices, you can send all future invoices directly to me so that I could get paid on time. ~ hope it's okay ~ thank you ~ Terry ~.

    Is this error caused by the special characters in the string?

    Here is the CTL file that I use:

    OPTIONS (SKIP = 2)
    LOAD DATA
    CHARACTERSET WE8ISO8859P1
    INFILE '$FILE'
    APPEND
    INTO TABLE "XXDM_DM_17_ONACCOUNT_REC_SRC"
    WHEN (1) != 'FOOTER'
    FIELDS TERMINATED BY '|'
    OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (
    <column_name>,
    <column_name>,
    COMMENTS,
    <column_name>,
    <column_name>
    )

    Thanks in advance,

    Aditya

    Hello

    I suspect it's because of the default length for character fields in sqlldr data types - CHAR(255), which takes no notice of what the actual table definition is.

    Try adding CHAR(2000) to your control file, so you end up with something like this:

    OPTIONS (SKIP = 2)
    LOAD DATA
    CHARACTERSET WE8ISO8859P1
    INFILE '$FILE'
    APPEND
    INTO TABLE "XXDM_DM_17_ONACCOUNT_REC_SRC"
    WHEN (1) != 'FOOTER'
    FIELDS TERMINATED BY '|'
    OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (
    <column_name>,
    <column_name>,
    COMMENTS CHAR(2000),
    <column_name>,
    <column_name>
    )

    Cheers,

    Harry

  • When does OMF add a data file to a tablespace?

    Hi friends,

    We use Oracle 11.2 with OMF (Oracle Managed Files) for a database on the Linux platform. The parameter DB_CREATE_FILE_DEST has been set. We have just received a warning message that a tablespace has crossed the 85%-full threshold.

    According to the Oracle documentation, OMF will automatically add a data file to the tablespace. After more than a week, I do not see any data file added by OMF.

    I want to know when OMF adds a data file to a tablespace. At 85%? 95%? Is there a parameter that controls this?

    Thank you

    newdba

    OMF does not automatically add a new data file.  You must explicitly add one with the ALTER TABLESPACE tbsname ADD DATAFILE command.

    What OMF does is provide a unique name and a default size (100 MB) for the data file.  That is why the ALTER TABLESPACE ... ADD DATAFILE command you ran didn't need to specify a file name or size.
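    A minimal sketch (the tablespace name is illustrative):

    -- With DB_CREATE_FILE_DEST set, neither a file name nor a size is required;
    -- OMF generates the name and defaults the file to 100 MB, autoextend on.
    ALTER TABLESPACE users ADD DATAFILE;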

    Hemant K Chitale

  • When loading, error: field in data file exceeds maximum length

    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

    PL/SQL Release 11.2.0.3.0 - Production

    CORE 11.2.0.3.0 Production

    TNS for Solaris: Version 11.2.0.3.0 - Production

    NLSRTL Version 11.2.0.3.0 - Production

    I am trying to load a small table (110 rows, 6 columns).  One of the columns, called NOTES, raises an error when I run the load, namely that the field size exceeds the maximum length.  As you can see here, the table column is 4000 bytes:

    CREATE TABLE NRIS.NRN_REPORT_NOTES
    (
      NOTES_CN      VARCHAR2(40 BYTE)  DEFAULT sys_guid() NOT NULL,
      REPORT_GROUP  VARCHAR2(100 BYTE) NOT NULL,
      ZIPCODE       VARCHAR2(50 BYTE)  NOT NULL,
      ROUND         NUMBER(3)          NOT NULL,
      NOTES         VARCHAR2(4000 BYTE),
      LAST_UPDATE   TIMESTAMP(6) WITH TIME ZONE DEFAULT systimestamp NOT NULL
    )
    TABLESPACE USERS
    RESULT_CACHE (MODE DEFAULT)
    PCTUSED 0
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
      INITIAL 80K
      NEXT 1M
      MINEXTENTS 1
      MAXEXTENTS UNLIMITED
      PCTINCREASE 0
      BUFFER_POOL DEFAULT
      FLASH_CACHE DEFAULT
      CELL_FLASH_CACHE DEFAULT
    )
    LOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING;

    I did a little investigating, and it doesn't add up.

    When I run:

    select max(lengthb(notes)) from NRIS.NRN_REPORT_NOTES;

    I get a return of:

    643

    This tells me that the largest value in this column is only 643 bytes.  But EVERY insert fails.

    Here is the header of the loader file and the first couple of inserts:

    LOAD DATA
    INFILE *
    BADFILE './NRIS.NRN_REPORT_NOTES.BAD'
    DISCARDFILE './NRIS.NRN_REPORT_NOTES.DSC'
    APPEND INTO TABLE NRIS.NRN_REPORT_NOTES
    FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '|'
    (
    NOTES_CN,
    REPORT_GROUP,
    ZIPCODE,
    ROUND NULLIF (ROUND = 'NULL'),
    NOTES,
    LAST_UPDATE TIMESTAMP WITH TIME ZONE 'MM/DD/YYYY HH24:MI:SS.FF9 TZR' NULLIF (LAST_UPDATE = 'NULL')
    )
    BEGINDATA
    |E2ACF256F01F46A7E0440003BA0F14C2|;|DEMOGRAPHIC DATA|;|A01003|;|3|;|Demographic results show that 46% of visits are made by women.  Among racial and ethnic minorities, the most often encountered are Native American (4%) and Hispanic/Latino (2%).  The breakdown by age shows that the Bitterroot has a relatively low proportion of children under 16 (14%) in the visiting population.  People over 60 represent about 22% of visits.   Most of the visitation comes from the region.  More than 85% of the visits come from people who live within 50 miles.|;07/29/2013 16:09:27.000000000 -06:00
    |E2ACF256F02046A7E0440003BA0F14C2|;|VISIT DESCRIPTION|;|A01003|;|3|;|Most visits to the Bitterroot are relatively short.  More than half of the visits last less than 3 hours.  The median duration of visits involving an overnight stay is about 43 hours, or about 2 days.  The average Wilderness visit lasts only about 6 hours, although more than half of these visits are shorter than 3 hours.   Most of the visits come from people who are frequent visitors.  Over thirty percent are made by people who visit between 40 and 100 times a year.  Another 8% of visits are from people who say they visit more than 100 times a year.|;07/29/2013 16:09:27.000000000 -06:00
    |E2ACF256F02146A7E0440003BA0F14C2|;|ACTIVITIES|;|A01003|;|3|;|The most often reported main activity is hiking (42%), followed by alpine skiing (12%) and hunting (8%).  More than half of the visits report participating in relaxation and viewing scenery.|;07/29/2013 16:09:27.000000000 -06:00

    Here's the start of the loader log, ending after the first rejected row.  (They ALL show the same error.)

    SQL*Loader: Release 10.2.0.4.0 - Production on Thu Aug 22 12:09:07 2013

    Copyright (c) 1982, 2007, Oracle.  All rights reserved.

    Control File:   NRIS.NRN_REPORT_NOTES.CTL
    Data File:      NRIS.NRN_REPORT_NOTES.CTL
    Bad File:       ./NRIS.NRN_REPORT_NOTES.BAD
    Discard File:   ./NRIS.NRN_REPORT_NOTES.DSC
    (Allow all discards)

    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Bind array:     64 rows, maximum of 256000 bytes
    Continuation:   none specified
    Path used:      Conventional

    Table NRIS.NRN_REPORT_NOTES, loaded from every logical record.
    Insert option in effect for this table: APPEND

    Column Name                    Position   Len  Term Encl Datatype
    ------------------------------ ---------- ---- ---- ---- ---------------------
    NOTES_CN                       FIRST      *    ;    O(|) CHARACTER
    REPORT_GROUP                   NEXT       *    ;    O(|) CHARACTER
    ZIPCODE                        NEXT       *    ;    O(|) CHARACTER
    ROUND                          NEXT       *    ;    O(|) CHARACTER
        NULL if ROUND = 0X4e554c4c(character 'NULL')
    NOTES                          NEXT       *    ;    O(|) CHARACTER
    LAST_UPDATE                    NEXT       *    ;    O(|) DATETIME MM/DD/YYYY HH24:MI:SS.FF9 TZR
        NULL if LAST_UPDATE = 0X4e554c4c(character 'NULL')

    Record 1: Rejected - Error on table NRIS.NRN_REPORT_NOTES, column NOTES.
    Field in data file exceeds maximum length.

    I don't see why this should fail.

    Hello

    the problem is the default length for character data in sqlldr - CHAR(255)... very useful, I know...

    you need to tell sqlldr that the data is longer than that.

    so change NOTES to NOTES CHAR(4000) in your control file and it should work. A sketch of the relevant part is below.
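    That is, only the NOTES entry in the column list changes; the other columns stay as they are (the surrounding lines are elided here):

    (
    ...
    NOTES CHAR(4000),
    ...
    )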

    cheers,

    Harry

  • Running an Idoc function in the data file returned by the GET_FILE service

    Hello

    I'm new to this forum, so thank you in advance for any help, and forgive me for any mistakes in my post.

    I'm trying to force the execution of a custom Idoc function in a UCM data file when that data file is requested from UCM through the GET_FILE service.

    The custom Idoc function is implemented as a filter of type computeFunction. One of the data file's elements calls my custom Idoc function:
    <wcm:element name="MainText">[!--$myIdocFunction()--]</wcm:element>

    The data file is then retrieved with RIDC via the GET_FILE service, but the Idoc function is not called.

    I tried to implement another filter, of type sendDataForServerResponse or sendDataForServerResponseBytes, customized to look for any call to my Idoc function in the cached responseString and responseBytes objects, then run the Idoc function and replace its output in the response. But this kind of filter never runs.

    The Idoc function myIdocFunction executes correctly when I use the WCM_PLACEHOLDER service to get a RegionTemplate (an .hcsp file) associated with the data file. In that case, the RegionTemplate refers to the "MainText" element of the data file with <!--$wcmElement("MainText")-->. But I need it to work with the GET_FILE service as well.

    I am using UCM version 11.1.1.3.0.

    Any suggestion?
    Thank you very much
    Francesco

    Hello

    Thank you very much for your help, and sorry for this late reply.

    Your trick of enabling full verbose tracing was helpful: I found out that I could use the prepareForFileResponse filter type for my purpose, and I could also refer to the implementation of the native pdfwatermark.PdfwFileFilter filter.

    I managed to set up a filter whose purpose is to force Idoc evaluation of a predefined list of Idoc functions on the output returned by the GET_FILE service. I paste the code I have written below, in case it may be useful to other people. In any case, be aware that this filter can cause performance problems, which must be weighed carefully in your own use cases.

    First, declare the filter in the Filters result set of your component's .hda file:

    @ResultSet Filters
    4
    type
    location
    parameter
    loadOrder
    prepareForFileResponse
    mysamplecomponent.ForceIdocEvaluationFilter
    null
    1
    @end

    Here is a simplified version of the implementation of the filter:

    // java.* imports shown below; the UCM classes (DataBinder, Workspace, FileService,
    // FilterImplementor, SharedObjects, etc.) come from the intradoc.* packages shipped with UCM.
    import java.io.File;
    import java.io.IOException;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class ForceIdocEvaluationFilter implements FilterImplementor {

        public int doFilter(Workspace ws, DataBinder binder, ExecutionContext ctx)
                throws DataException, ServiceException {
            String service = binder.getLocal("IdcService");
            String dDocName = binder.getLocal("dDocName");
            boolean isInternalCall = Boolean.parseBoolean(binder.getLocal("isInternalCall"));
            if ((ctx instanceof FileService) && service.equals("GET_FILE") && !isInternalCall) {
                FileService fileService = (FileService) ctx;
                checkToForceIdocEvaluation(dDocName, fileService);
            }
            // continue with the other filters
            return CONTINUE;
        }

        private void checkToForceIdocEvaluation(String dDocName, FileService fileService)
                throws DataException, ServiceException {
            File primaryFile = IOUtils.getContentPrimaryFile(dDocName);
            String ext = FileUtils.getExtension(primaryFile.getPath());
            if (ext.equalsIgnoreCase("xml")) {
                forceIdocEvaluation(primaryFile, fileService);
            }
        }

        private void forceIdocEvaluation(File primaryFile, FileService fileService)
                throws ServiceException {
            String fileContent = IOUtils.readStringFromFile(primaryFile);
            ForceIdocEvaluationPatternReplacer replacer = new ForceIdocEvaluationPatternReplacer(fileService);
            String replacedContent = replacer.replace(fileContent);
            if (replacer.isMatchFound()) {
                setNewOutputOfService(fileService, replacedContent);
            }
        }

        private void setNewOutputOfService(FileService fileService, String newOutput)
                throws ServiceException {
            File newOutputFile = IOUtils.createTemporaryFile("xml");
            IOUtils.saveFile(newOutput, newOutputFile);
            fileService.setFile(newOutputFile.getPath());
        }
    }

    public class IOUtils {

        public static File getContentPrimaryFile(String dDocName)
                throws DataException, ServiceException {
            DataBinder serviceBinder = new DataBinder();
            serviceBinder.m_isExternalRequest = false;
            serviceBinder.putLocal("IdcService", "GET_FILE");
            serviceBinder.putLocal("dDocName", dDocName);
            serviceBinder.putLocal("RevisionSelectionMethod", "Latest");
            serviceBinder.putLocal("isInternalCall", "true");
            ServiceUtils.executeService(serviceBinder);
            String vaultFileName = DirectoryLocator.computeVaultFileName(serviceBinder);
            String vaultFilePath = DirectoryLocator.computeVaultPath(vaultFileName, serviceBinder);
            return new File(vaultFilePath);
        }

        public static String readStringFromFile(File sourceFile) throws ServiceException {
            try {
                return FileUtils.loadFile(sourceFile.getPath(), null, new String[] {"UTF-8"});
            } catch (IOException e) {
                throw new ServiceException(e);
            }
        }

        public static void saveFile(String source, File destination) throws ServiceException {
            FileUtils.writeFile(source, destination, "UTF-8", 0, "could not save file " + destination);
        }

        public static File getTemporaryFilesDir() throws ServiceException {
            String idcDir = SharedObjects.getEnvironmentValue("IntradocDir");
            String tmpDir = idcDir + "custom/MySampleComponent";
            FileUtils.checkOrCreateDirectory(tmpDir, 1);
            return new File(tmpDir);
        }

        public static File createTemporaryFile(String fileExtension) throws ServiceException {
            try {
                File tmpFile = File.createTempFile("tmp", "." + fileExtension, IOUtils.getTemporaryFilesDir());
                tmpFile.deleteOnExit();
                return tmpFile;
            } catch (IOException e) {
                throw new ServiceException(e);
            }
        }
    }

    public abstract class PatternReplacer {

        private boolean matchFound = false;

        public String replace(CharSequence sourceString) throws ServiceException {
            Matcher m = getPattern().matcher(sourceString);
            StringBuffer sb = new StringBuffer(sourceString.length());
            matchFound = false;
            while (m.find()) {
                matchFound = true;
                String matchedText = m.group(0);
                String replacement = doReplace(matchedText);
                m.appendReplacement(sb, Matcher.quoteReplacement(replacement));
            }
            m.appendTail(sb);
            return sb.toString();
        }

        protected abstract String doReplace(String textToReplace) throws ServiceException;

        public abstract Pattern getPattern() throws ServiceException;

        public boolean isMatchFound() {
            return matchFound;
        }
    }

    public class ForceIdocEvaluationPatternReplacer extends PatternReplacer {

        private ExecutionContext ctx;
        private Pattern idocPattern;

        public ForceIdocEvaluationPatternReplacer(ExecutionContext ctx) {
            this.ctx = ctx;
        }

        @Override
        public Pattern getPattern() throws ServiceException {
            if (idocPattern == null) {
                List functions = SharedObjects.getEnvValueAsList("forceidocevaluation.functionlist");
                idocPattern = IdocUtils.createIdocPattern(functions);
            }
            return idocPattern;
        }

        @Override
        protected String doReplace(String idocFunction) throws ServiceException {
            return IdocUtils.executeIdocFunction(ctx, idocFunction);
        }
    }

    public class IdocUtils {

        public static String executeIdocFunction(ExecutionContext ctx, String idocFunction)
                throws ServiceException {
            idocFunction = convertIdocStyle(idocFunction, IdocStyle.ANGULAR_BRACKETS);
            PageMerger activeMerger = (PageMerger) ctx.getCachedObject("PageMerger");
            try {
                String output = activeMerger.evaluateScript(idocFunction);
                return output;
            } catch (Exception e) {
                throw new ServiceException("could not execute Idoc function " + idocFunction, e);
            }
        }

        public enum IdocStyle {
            ANGULAR_BRACKETS,
            SQUARE_BRACKETS
        }

        public static String convertIdocStyle(String idocFunction, IdocStyle destinationStyle) {
            String result = null;
            switch (destinationStyle) {
                case ANGULAR_BRACKETS:
                    result = idocFunction.replace("[!--$", "<$").replace("--]", "$>");
                    break;
                case SQUARE_BRACKETS:
                    result = idocFunction.replace("<$", "[!--$").replace("$>", "--]");
                    break;
            }
            return result;
        }

        public static Pattern createIdocPattern(List idocFunctions) throws ServiceException {
            if (idocFunctions.isEmpty()) {
                throw new ServiceException("list of Idoc functions to create a pattern for is empty");
            }
            StringBuffer patternBuffer = new StringBuffer();
            // pattern prefix
            patternBuffer.append("(\\[!--\\$|<\\$)(");
            // OR-ed list of function names
            for (int i = 0; i < idocFunctions.size(); i++) {
                patternBuffer.append(idocFunctions.get(i));
                if (i < idocFunctions.size() - 1) {
                    patternBuffer.append("|");
                }
            }
            // pattern suffix
            patternBuffer.append(")(.+?)(--\\]|\\$>)");
            String pattern = patternBuffer.toString();
            return Pattern.compile(pattern);
        }
    }

    public class ServiceUtils {

        private static Workspace getSystemWorkspace() {
            Workspace workspace = null;
            Provider wsProvider = Providers.getProvider("SystemDatabase");
            if (null != wsProvider) {
                workspace = (Workspace) wsProvider.getProvider();
            }
            return workspace;
        }

        private static UserData getFullUserData(String userName, ExecutionContext cxt, Workspace ws)
                throws DataException, ServiceException {
            if (null == ws) {
                ws = getSystemWorkspace();
            }
            UserData userData = UserStorage.retrieveUserDatabaseProfileDataFull(userName, ws, null, cxt, true, true);
            ws.releaseConnection();
            return userData;
        }

        public static void executeService(DataBinder binder) throws DataException, ServiceException {
            // get a connection to the database
            Workspace workspace = getSystemWorkspace();
            // look for a value of IdcService
            String cmd = binder.getLocal("IdcService");
            if (null == cmd) {
                throw new DataException("!csIdcServiceMissing");
            }
            // get the service definition
            ServiceData serviceData = ServiceManager.getFullService(cmd);
            if (null == serviceData) {
                throw new DataException(LocaleUtils.encodeMessage("!csNoServiceDefined", null, cmd));
            }
            // create the service object for this service
            Service service = ServiceManager.createService(serviceData.m_classID, workspace, null, binder, serviceData);
            String userName = "sysadmin";
            UserData fullUserData = getFullUserData(userName, service, workspace);
            service.setUserData(fullUserData);
            binder.m_environment.put("REMOTE_USER", userName);
            try {
                // init the service to not return HTML
                service.setSendFlags(true, true);
                // create the ServiceHandlers and implementors
                service.initDelegatedObjects();
                // do a security check
                service.globalSecurityCheck();
                // prepare for the service
                service.preActions();
                // execute the service
                service.doActions();
            } catch (ServiceException e) {
                // swallow and fall through to cleanup, as in the original post
            } finally {
                service.cleanUp(true);
                if (null != workspace) {
                    workspace.releaseConnection();
                }
            }
        }
    }
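    For completeness: the filter reads its function list from the environment key used in getPattern() above, so a matching entry is needed in the component's environment settings. A hedged example of what that entry might look like (the function names are illustrative):

    forceidocevaluation.functionlist=myIdocFunction,anotherCustomFunction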
