Issue with creating/editing a UCM contributor data file

Hello

We have the requirements below:

(a) Create a contributor data file programmatically (not through the Site Studio contributor UI) for a region definition RD1, using a custom template rather than the default.xml model provided by Site Studio (SS).

I created a contributor data file for RD1 and set it as the primary file, so that it can be used to create other data files for this RD1. I used the CHECKIN_NEW IDC service to create the data file. Please find the code snippet attached; I was able to create the data file successfully.

(b) Edit the data file and have it display correctly.

However, when I try to modify the data file in UCM, I get an error message. Please find the error message attached.

I would like to know whether there is a Site Studio IDC service to create web assets based on a custom template. I want to create the web assets from a WebCenter application, not from the Site Studio contributor.

Hi Nikhil,

Try this piece of code:

DataBinder dataBinder = idcClient.createBinder();
dataBinder.putLocal("IdcService", "SS_CHECKIN_NEW");
dataBinder.putLocal("xRegionDefinition", prop.getProperty("regiondefinition"));
dataBinder.putLocal("dDocTitle", prop.getProperty("title"));
dataBinder.putLocal("dSecurityGroup", "Public");
dataBinder.putLocal("ssDefaultDocumentToken", "SSContributorDataFile");
// attach the template XML as the primary file (TransferFile is from oracle.stellent.ridc.model)
dataBinder.addFile("primaryFile", new TransferFile(new File(prop.getProperty("filename"))));
dataBinder.putLocal("xWebsiteObjectType", "Data file");
dataBinder.putLocal("dDocType", "Document");
// this parameter is passed when no particular site is selected in the drop-down menu
dataBinder.putLocal("dataFileFieldValue", "Data file");

Here I use a properties file in which the UCM connection details and the region definition to which this contributor data file (CDF) is attached are defined:

filename - set to default.xml for now (to verify normal behaviour)

regiondefinition - the content ID of the region definition to which the CDF is attached

title - the title of the content item, per the customer's requirement

Test with this code, check whether it meets the requirement you are looking for, and update the thread with the results.
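
For reference, here is a minimal, self-contained sketch of the surrounding setup (creating the client, loading the properties file, sending the request and reading the response); the connection URL, user and properties file name are placeholders, not values from the original post:

import java.io.File;
import java.io.FileInputStream;
import java.util.Properties;

import oracle.stellent.ridc.IdcClient;
import oracle.stellent.ridc.IdcClientManager;
import oracle.stellent.ridc.IdcContext;
import oracle.stellent.ridc.model.DataBinder;
import oracle.stellent.ridc.model.TransferFile;
import oracle.stellent.ridc.protocol.ServiceResponse;

public class CreateDataFile {

    public static void main(String[] args) throws Exception {
        // properties file holding filename, regiondefinition and title (hypothetical name)
        Properties prop = new Properties();
        prop.load(new FileInputStream("cdf.properties"));

        // placeholder intradoc URL and user
        IdcClientManager manager = new IdcClientManager();
        IdcClient idcClient = manager.createClient("idc://ucm-host:4444");
        IdcContext userContext = new IdcContext("sysadmin");

        DataBinder dataBinder = idcClient.createBinder();
        dataBinder.putLocal("IdcService", "SS_CHECKIN_NEW");
        dataBinder.putLocal("xRegionDefinition", prop.getProperty("regiondefinition"));
        dataBinder.putLocal("dDocTitle", prop.getProperty("title"));
        dataBinder.putLocal("dSecurityGroup", "Public");
        dataBinder.putLocal("ssDefaultDocumentToken", "SSContributorDataFile");
        dataBinder.putLocal("xWebsiteObjectType", "Data file");
        dataBinder.putLocal("dDocType", "Document");
        dataBinder.putLocal("dataFileFieldValue", "Data file");
        // attach the custom template XML as the primary file
        dataBinder.addFile("primaryFile",
                new TransferFile(new File(prop.getProperty("filename"))));

        ServiceResponse response = idcClient.sendRequest(userContext, dataBinder);
        DataBinder responseData = response.getResponseAsBinder();
        // dDocName of the newly created contributor data file
        System.out.println("Created data file: " + responseData.getLocal("dDocName"));
        response.close();
    }
}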

Thank you

Srinath

Tags: Fusion Middleware

Similar Questions

  • Get contributor data file content from a custom IdocScript function

    Hello

    I have a requirement to retrieve a number of elements from a given contributor data file. The problem is, I don't know how to retrieve the XML content from inside a custom IdocScript component.

    Given the content ID of the contributor data file, is there an easy way to get the XML file, or do I need to use a service call?

    Thank you!

    If it's a Java component, why not just get the XML file directly from the file system? Depending on the service requested, you could already have a file path in the binder... If not, use the IdcFileDescriptor object:

    https://groups.Yahoo.com/neo/groups/intradoc_users/conversations/topics/25937
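
    If a service call is acceptable instead, a minimal RIDC sketch along these lines could fetch the data file XML via GET_FILE; the connection URL, user and dDocName below are placeholders, and this is an alternative to the in-component approach described above:

    import oracle.stellent.ridc.IdcClient;
    import oracle.stellent.ridc.IdcClientManager;
    import oracle.stellent.ridc.IdcContext;
    import oracle.stellent.ridc.model.DataBinder;
    import oracle.stellent.ridc.protocol.ServiceResponse;

    public class ReadContributorDataFile {

        public static void main(String[] args) throws Exception {
            IdcClientManager manager = new IdcClientManager();
            IdcClient idcClient = manager.createClient("idc://ucm-host:4444"); // placeholder host/port
            IdcContext userContext = new IdcContext("sysadmin");               // placeholder user

            DataBinder binder = idcClient.createBinder();
            binder.putLocal("IdcService", "GET_FILE");
            binder.putLocal("dDocName", "MY_DATA_FILE");                 // placeholder content ID of the data file
            binder.putLocal("RevisionSelectionMethod", "LatestReleased");

            ServiceResponse response = idcClient.sendRequest(userContext, binder);
            // for a contributor data file the returned stream is the raw XML
            String xml = response.getResponseAsString();
            System.out.println(xml);
            response.close();
        }
    }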

  • Content Presenter: display several Site Studio contributor data files

    Hello

    I followed the A-Team solution http://www.ateam-oracle.com/content-presenter-cmis-complete/ to display a Site Studio contributor data file, but I made a few changes to try to display multiple data files instead.

    After changing the code for this, the page displays 2 hyperlinks to the 2 data files. Can someone advise?

    < parameter id = "taskFlowInstId" value = "${"1234567"}" / >

    < parameter id = "datasourceType" value = "${"dsTypeMultiNode"}" / >

    < parameter id = "datasource".

    value = "${" dDocName:REGION_A_DATA_FILE # UCM, UCM #dDocName:REGION_B_DATA_FILE "}" / >

    < parameter id = "Categoriemodele" value = "${" "}" / >

    < parameter id = "templateView" value = "${'template.id'}" / >

    < parameter id = "maxResults" value = "${" "}" / >

    < / Parameter >

    Hello

    Are you using a WebCenter Portal Builder application or a WebCenter Portal Framework application?

    Please describe your use case. The A-Team blog also shows how to display several content data files; the only change needed is in the CMIS query that fetches the result set.

    Please also look at the Content Presenter template code described in the WebCenter documentation for displaying multiple content items.

    http://docs.Oracle.com/CD/E21764_01/WebCenter.1111/e10148/jpsdg_content_presenter.htm#JPSDG7547

    You mentioned that the page shows 2 hyperlinks to the 2 data files.

    What do you expect as a result?

    When datasourceType = dsTypeMultiNode, datasource defines a comma-separated set of document node identifiers in the format:

    connection_name#dDocName:content_id,connection_name#dDocName:content_id,...

    Example: myconn#dDocName:DOCUMENT_ID_12345,myconn#dDocName:DOCUMENT_ID_56789

    Please check.

    If that is what you already have, you can also test with another datasourceType, for example:

    When datasourceType = dsTypeFolderContents, datasource is set to a single folder node identifier in the format:

    connection_name#dCollectionID:collection_id

    Example: myconnection.us.oracle.com#dCollectionID:45535

    Adding Content Task Flows and Document Components to a Portal Page - 11g Release 1 (11.1.1.6.2)

    Thank you

    Amey

  • How do I shrink the data file after moving the images to another data file

    All,

    I did a simple test to see how I should shrink my data file after moving its objects elsewhere.

    I had images in my USER_image data file, which is about 9 GB in size. I created a new data file under a new tablespace and moved all the images into this new tablespace. So my old image tablespace with its data file is now empty; at least, it contains no user objects.

    Now I tried to resize the old data file (USER_image) to a smaller value, but I keep getting this error:
    "ORA-03297: file contains used data beyond requested RESIZE value"

    The file size is 9 GB and I tried to resize it to 8 GB, but it still does not let me.

    Is it possible to resize a data file after moving all of its objects?
    Am I doing something wrong or missing something?

    Thanks in advance.

    If your old image tablespace is indeed empty (run a query against DBA_SEGMENTS to check), then you can drop it, data files included.
    After that, you can create a 'new' tablespace with the old name and the desired data file sizes.

  • How can I recover the add-ons and their data from a failed hard drive (the OS sectors are lost, but data files are readable)?

    The hard drive that held my OS (Windows XP Pro SP3) failed and lost quite a few sectors which are essential for running the operating system. Other data is still readable. I got another hard drive and installed Windows XP SP2, Firefox and other programs. I was able to retrieve the bookmarks, security certificates and other profile information using information I found in other articles.

    None of them addressed how to recover the add-ons or their data. Specifically, there are several large Stylish scripts that took months to develop and customize.

    The migration and related articles do not work for me because they require the old copy of Firefox to be functional, which it is not, since the OS on that hard drive is damaged. Is it possible to recover this data, similar (or not) to how I recovered the other profile info?

    Have you copied the entire C:\Documents and Settings\username\Application Data\Mozilla\Firefox\ folder from the old drive?
    If not, can you?
    If so, make a copy and save that folder somewhere else just in case.

    If so, you could take that folder on the new installation (the \Profiles\ folder with your profiles inside, plus the profiles.ini file; delete any other files/folders that may also be in the "Firefox" folder) and then replace it with the same-named folder from the old, failed drive. Note that you will lose whatever you already have in the new installation/profile!
    Your profile folder contains all your personal data and customizations, including search plugins, themes, extensions and their data/customizations, but not plugins.
    But if the user logon name is different on the new installation than on the old one, any extension that uses an absolute file path in its prefs will have problems. That is easily rectified by changing the file path in the prefs.js file; keep the line-break formatting intact. Extensions created after the Firefox 2.0 or 3.0 era are usually not a problem, due to changes in the 'rules' for creating extensions, but some really old extensions that have needed only 'minor' updates since then can still use absolute paths, even though I have not seen that myself since Firefox 3.6 or 4.0.

    Note that instead of 'add-ons' I mentioned the 4 types of 'add-ons' separately. Plugins show up as 'Add-ons', but they are not installed in the profile [except those mislabelled as a "plugin" when they are installed via an XPI file]; rather, they live in the operating system, where Firefox 'finds' them through the registry.

    Note: migration articles may tell you not to re-use the prefs.js file, due to an issue that I feel is easily fixed with a little inspection and editing. I think you can manage that; my perception is that you have a small shovel in your toolbox, so if you run into a problem you are able to do a little digging and fix problems with the file paths, once you have been warned.

    Overall, whether you go from Firefox 35 to 35 or even from Firefox 34 to 35, I don't think you will run into any problems that you cannot handle [there I cross my fingers and "hope" I'm not missing something obvious].

    With regard to recovering the 'data' for individual extensions, there are many ways that extension developers have used to store their data and prefs. The original way was to save to prefs.js or to their own RDF file in the profile folder. As Firefox developed further, developers started using their own files in the profile folder. And since Mozilla started using sqlite database files in Firefox 3.0, Mozilla has extended its own use of sqlite, as have extension developers.
    Stylish uses the stylish.sqlite file to store 'styles', but something in the back of my mind tells me that 'the index' may not be in that file with the data. But then again, I may be confusing myself with an issue I had with GreaseMonkey a while back, where I copied the gm_scripts folder into a new profile that already had GreaseMonkey installed but no scripts. Those GM scripts worked, but I could not see or modify them; they did not appear in the GreaseMonkey user interface window in Firefox.

  • Temporary tablespace data file in Oracle 10g

    I use Oracle 10g R2. I deliberately deleted the temporary tablespace's data file at the OS level and then restarted the database. When I restarted it, the instance started and the DB opened successfully. I then checked on the server: the temp file had been re-created. Is this something new in Oracle 10g? I don't think I saw this in Oracle 9i. I would be grateful if someone could clarify this...

    govindts wrote:
    Thank you all... This is my last question. Once I have the answer, I will mark this thread as answered.

    Is this new feature available only for databases that use RMAN? My database automatically re-creates the temporary file when I restart it, but my DB is registered with an RMAN catalog. Does it also work for a database that is not registered with an RMAN catalog?

    Govind,

    This has nothing to do with databases being registered with RMAN. RMAN is a tool that is always there in the database binary stack; it is up to you to use it or not. Whether the database is part of an RMAN catalog or not also makes no difference to the re-creation of the temp file. This feature is there in 10g databases out of the box. Here is my db, with no archive logging and no RMAN catalog configured.

    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    
    Connected to an idle instance.
    
    SQL> startup
    ORACLE instance started.
    
    Total System Global Area  167772160 bytes
    Fixed Size                  1247900 bytes
    Variable Size              75498852 bytes
    Database Buffers           88080384 bytes
    Redo Buffers                2945024 bytes
    Database mounted.
    Database opened.
    SQL> archive log list
    Database log mode              No Archive Mode
    Automatic archival             Disabled
    Archive destination            USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence     41
    Current log sequence           43
    SQL> archive log list
    Database log mode              No Archive Mode
    Automatic archival             Disabled
    Archive destination            USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence     41
    Current log sequence           43
    SQL> select * from dba_temp_files;
    
    FILE_NAME
    --------------------------------------------------------------------------------
       FILE_ID TABLESPACE_NAME                     BYTES     BLOCKS STATUS
    ---------- ------------------------------ ---------- ---------- ---------
    RELATIVE_FNO AUT   MAXBYTES  MAXBLOCKS INCREMENT_BY USER_BYTES USER_BLOCKS
    ------------ --- ---------- ---------- ------------ ---------- -----------
    E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\TEMP01.DBF
             1 TEMP                             20971520       2560 AVAILABLE
               1 YES 3.4360E+10    4194302           80   19922944        2432
    
    SQL> shut immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    
    E:\Documents and Settings\aristadba>cd E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\
    
    E:\oracle\product\10.2.0\oradata\orcl>rename TEMP01.DBF TEMP01.DBF.old
    
    E:\oracle\product\10.2.0\oradata\orcl>sqlplus / as sysdba
    
    SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jun 7 10:26:15 2009
    
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    
    Connected to an idle instance.
    
    SQL> startup
    ORACLE instance started.
    
    Total System Global Area  167772160 bytes
    Fixed Size                  1247900 bytes
    Variable Size              75498852 bytes
    Database Buffers           88080384 bytes
    Redo Buffers                2945024 bytes
    Database mounted.
    Database opened.
    SQL>
    

    And here is an excerpt from the alert log; you can see the message about the temp file being re-created.

    Sun Jun 07 10:26:54 2009
    ALTER DATABASE OPEN
    Sun Jun 07 10:26:54 2009
    Thread 1 opened at log sequence 43
      Current log# 3 seq# 43 mem# 0: E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.LOG
    Successful open of redo thread 1
    Sun Jun 07 10:26:54 2009
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Sun Jun 07 10:26:54 2009
    SMON: enabling cache recovery
    Sun Jun 07 10:26:55 2009
    Successfully onlined Undo Tablespace 1.
    Sun Jun 07 10:26:55 2009
    SMON: enabling tx recovery
    Sun Jun 07 10:26:55 2009
    Re-creating tempfile E:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\TEMP01.DBF
    Database Characterset is WE8MSWIN1252
    replication_dependency_tracking turned off (no async multimaster replication found)
    Starting background process QMNC
    QMNC started with pid=15, OS id=3828
    Sun Jun 07 10:26:59 2009
    Completed: ALTER DATABASE OPEN
    Sun Jun 07 10:27:00 2009
    db_recovery_file_dest_size of 2048 MB is 0.33% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    

    Update:

    Just to prove that the file has been re-created, here is a listing of the oradata folder:

    SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    
    E:\oracle\product\10.2.0\oradata\orcl>ls
    'ls' is not recognized as an internal or external command,
    operable program or batch file.
    
    E:\oracle\product\10.2.0\oradata\orcl>dir
     Volume in drive E has no label.
     Volume Serial Number is 5449-27B9
    
     Directory of E:\oracle\product\10.2.0\oradata\orcl
    
    06/07/2009  10:26 AM              .
    06/07/2009  10:26 AM              ..
    06/07/2009  10:29 AM         7,061,504 CONTROL01.CTL
    06/07/2009  10:29 AM         7,061,504 CONTROL02.CTL
    06/07/2009  10:29 AM         7,061,504 CONTROL03.CTL
    06/07/2009  10:24 AM       104,865,792 EXAMPLE01.DBF
    06/07/2009  10:24 AM        52,429,312 REDO01.LOG
    06/07/2009  10:24 AM        52,429,312 REDO02.LOG
    06/07/2009  10:24 AM        52,429,312 REDO03.LOG
    06/07/2009  10:24 AM       272,637,952 SYSAUX01.DBF
    06/07/2009  10:24 AM       513,810,432 SYSTEM01.DBF
    06/07/2009  10:26 AM        20,979,712 TEMP01.DBF
    06/06/2009  12:59 AM        20,979,712 TEMP01.DBF.old
    06/07/2009  10:24 AM        31,465,472 UNDOTBS01.DBF
    06/07/2009  10:24 AM         5,251,072 USERS01.DBF
                  13 File(s)  1,148,462,592 bytes
                   2 Dir(s)  63,850,295,296 bytes free
    

    HTH
    Aman...

    Edited by: Aman... on June 7, 2009 10:29
    added the last section of code.

  • Roll forward a standby database with an incremental backup when a data file is dropped on the primary

    Hello

    I have run nologging operations and dropped some data files on the primary, and I want to roll the standby forward using an SCN-based incremental backup.

    What do I do in particular, given that data files have been dropped?

    I came across (Doc ID 1531031.1), which explains how to roll forward when a data file is added.

    If I follow the same steps, apart from the step that restores the newly added data file, will it work in my case?

    Can someone please clarify?

    Thank you

    San

    I was wondering: if recover noredo is performed before the controlfile is restored, will Oracle apply the incremental backup to the files without errors, and in that case what would the status of the dropped data file be in the control file?

    Why do you think that recovering the standby first and then restoring the controlfile will lead to problems? Please read my first post on this thread; I clearly mentioned that you would not face problems if you go with this method.

    Here is a demo for you with force logging disabled. It first recovers the standby and then restores the standby controlfile:

    Primary: oraprim

    Standby: orastb

    A data file of the MYTS tablespace will be dropped on the primary:

    SYS@oraprim> select force_logging from v$database;

    FORCE_LOGGING

    ---------------------------------------

    NO

    Currently the tablespace has 2 data files:

    SYS@oraprim> select file_name from dba_data_files where tablespace_name = 'MYTS';

    FILE_NAME

    -------------------------------------------------------

    /u01/app/oracle/oradata/oraprim/myts01.dbf

    /u01/app/oracle/oradata/oraprim/myts02.dbf

    On the standby, the tablespace also has 2 data files:

    SYS@orastb> select name from v$datafile where ts# = 6;

    NAME

    --------------------------------------------------------------------------------

    /u01/app/oracle/oradata/orastb/myts01.dbf

    /u01/app/oracle/oradata/orastb/myts02.dbf

    Deferred log shipping to the standby on the primary:

    SYS@oraprim> alter system set log_archive_dest_state_3 = defer;

    System altered.

    Dropped one MYTS data file on the primary:

    SYS@oraprim> alter tablespace myts drop datafile '/u01/app/oracle/oradata/oraprim/myts02.dbf';

    Tablespace altered.

    Removed some archived logs to create a gap:

    [oracle@ora12c-1 2016_01_05]$ rm -rf *31*

    [oracle@ora12c-1 2016_01_05]$ ls -lrt

    total 13696

    -rw-r----- 1 oracle oinstall 10534400 Jan  5 18:46 o1_mf_1_302_c8qjl3t7_.arc
    -rw-r----- 1 oracle oinstall  2714624 Jan  5 18:47 o1_mf_1_303_c8qjmhpq_.arc
    -rw-r----- 1 oracle oinstall   526336 Jan  5 18:49 o1_mf_1_304_c8qjp7sb_.arc
    -rw-r----- 1 oracle oinstall    23552 Jan  5 18:49 o1_mf_1_305_c8qjpsmh_.arc
    -rw-r----- 1 oracle oinstall    53760 Jan  5 18:50 o1_mf_1_306_c8qjsfqo_.arc
    -rw-r----- 1 oracle oinstall    14336 Jan  5 18:51 o1_mf_1_307_c8qjt9rh_.arc
    -rw-r----- 1 oracle oinstall     1024 Jan  5 18:53 o1_mf_1_309_c8qjxt4z_.arc
    -rw-r----- 1 oracle oinstall   110592 Jan  5 18:53 o1_mf_1_308_c8qjxt34_.arc

    [oracle@ora12c-1 2016_01_05]$

    Current MYTS data files on the primary:

    SYS@oraprim> select file_name from dba_data_files where tablespace_name = 'MYTS';

    FILE_NAME

    -------------------------------------------------------

    /u01/app/oracle/oradata/oraprim/myts01.dbf

    Current MYTS data files on the standby:

    SYS@orastb> select name from v$datafile where ts# = 6;

    NAME

    --------------------------------------------------------------------------------

    /u01/app/oracle/oradata/orastb/myts01.dbf

    /u01/app/oracle/oradata/orastb/myts02.dbf

    Gap is created:

    SYS@orastb> select process, status, sequence# from v$managed_standby;

    PROCESS   STATUS        SEQUENCE#
    --------- ------------ ----------
    ARCH      CLOSING             319
    ARCH      CLOSING             311
    ARCH      CONNECTED             0
    ARCH      CLOSING             310
    MRP0      WAIT_FOR_GAP        312
    RFS       IDLE                  0
    RFS       IDLE                  0
    RFS       IDLE                  0
    RFS       IDLE                320

    9 rows selected.

    An RMAN incremental backup is taken on the primary:

    RMAN> backup incremental from scn 2686263 database format '/u02/bkp/%d_inc_%U.bak';

    Starting backup at 05-JAN-16

    using target database control file instead of recovery catalog

    configuration for DISK channel 2 is ignored

    configuration for DISK channel 3 is ignored

    configuration for DISK channel 4 is ignored

    allocated channel: ORA_DISK_1

    channel ORA_DISK_1: SID=41 device type=DISK

    channel ORA_DISK_1: starting full datafile backup set

    channel ORA_DISK_1: specifying datafile(s) in backup set

    input datafile file number=00001 name=/u01/app/oracle/oradata/oraprim/system01.dbf

    input datafile file number=00003 name=/u01/app/oracle/oradata/oraprim/sysaux01.dbf

    input datafile file number=00004 name=/u01/app/oracle/oradata/oraprim/undotbs01.dbf

    input datafile file number=00006 name=/u01/app/oracle/oradata/oraprim/users01.dbf

    input datafile file number=00057 name=/u01/app/oracle/oradata/oraprim/myts01.dbf

    channel ORA_DISK_1: starting piece 1 at 05-JAN-16

    channel ORA_DISK_1: finished piece 1 at 05-JAN-16

    piece handle=/u02/bkp/ORAPRIM_inc_42qqkmaq_1_1.bak tag=TAG20160105T190016 comment=NONE

    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02

    Finished backup at 05-JAN-16

    Backed up the standby controlfile on the primary:

    RMAN> backup current controlfile for standby format '/u02/bkp/ctl.ctl';

    Cancel managed recovery on the standby:

    SYS@orastb> alter database recover managed standby database cancel;

    Database altered.

    Recover the standby using the backup pieces taken above:

    RMAN> recover database noredo;

    Starting recover at 05-JAN-16

    configuration for DISK channel 2 is ignored

    configuration for DISK channel 3 is ignored

    configuration for DISK channel 4 is ignored

    allocated channel: ORA_DISK_1

    channel ORA_DISK_1: SID=26 device type=DISK

    channel ORA_DISK_1: starting incremental datafile backup set restore

    channel ORA_DISK_1: specifying datafile(s) to restore from backup set

    destination for restore of datafile 00001: /u01/app/oracle/oradata/orastb/system01.dbf

    destination for restore of datafile 00003: /u01/app/oracle/oradata/orastb/sysaux01.dbf

    destination for restore of datafile 00004: /u01/app/oracle/oradata/orastb/undotbs01.dbf

    destination for restore of datafile 00006: /u01/app/oracle/oradata/orastb/users01.dbf

    destination for restore of datafile 00057: /u01/app/oracle/oradata/orastb/myts01.dbf

    channel ORA_DISK_1: reading from backup piece /u02/bkp/ORAPRIM_inc_3uqqkma0_1_1.bak

    channel ORA_DISK_1: piece handle=/u02/bkp/ORAPRIM_inc_3uqqkma0_1_1.bak tag=TAG20160105T190016

    channel ORA_DISK_1: restored backup piece 1

    channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

    Finished recover at 05-JAN-16

    Restored the standby controlfile and mounted the standby:

    RMAN> shutdown immediate

    database dismounted

    Oracle instance shut down

    RMAN> startup nomount

    connected to target database (not started)

    Oracle instance started

    Total System Global Area     939495424 bytes
    Fixed Size                     2295080 bytes
    Variable Size                348130008 bytes
    Database Buffers             583008256 bytes
    Redo Buffers                   6062080 bytes

    RMAN> restore standby controlfile from '/u02/ctl.ctl';

    Starting restore at 05-JAN-16

    allocated channel: ORA_DISK_1

    channel ORA_DISK_1: SID=20 device type=DISK

    channel ORA_DISK_1: restoring control file

    channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

    output file name=/u01/app/oracle/oradata/orastb/control01.ctl

    output file name=/u01/app/oracle/fast_recovery_area/orastb/control02.ctl

    Finished restore at 05-JAN-16

    RMAN> alter database mount;

    Statement processed

    released channel: ORA_DISK_1

    Now the dropped data file no longer exists on the standby:

    SYS@orastb> alter database recover managed standby database disconnect;

    Database altered.

    SYS@orastb> select process, status, sequence# from v$managed_standby;

    PROCESS   STATUS        SEQUENCE#
    --------- ------------ ----------
    ARCH      CONNECTED             0
    ARCH      CONNECTED             0
    ARCH      CONNECTED             0
    ARCH      CLOSING             329
    RFS       IDLE                  0
    RFS       IDLE                330
    RFS       IDLE                  0
    MRP0      APPLYING_LOG        330

    8 rows selected.

    SYS@orastb> select name from v$datafile where ts# = 6;

    NAME

    --------------------------------------------------------------------------------

    /u01/app/oracle/oradata/orastb/myts01.dbf

    Hope that gives you a clear picture. You can also refer to this post on rolling a physical standby forward using an SCN-based RMAN incremental backup: Roll forward a physical standby database using RMAN incremental backup | Shivananda Rao

    -Jonathan Rolland

  • Find the time a data file was restored

    RDBMS version: 11.2.0.4

    Platform: Oracle Linux 6.4

    To test our RMAN tape backup (NetBackup), we created a tablespace called BACKUPTEST with a single data file.

    Then we deleted the data file. The data file was then restored and recovered with RMAN, and the tablespace was brought back online.

    I wanted to show my manager evidence that the data file had been restored. But v$datafile.CREATION_TIME shows the time the tablespace was created. Is there another way I could find the time the data file was restored, other than the RMAN log?

    Below is an excerpt from after the data file was restored.

    $ sqlplus / as sysdba

    SQL*Plus: Release 11.2.0.4.0 Production on Fri Dec 19 15:08:03 2013

    Copyright (c) 1982, 2013, Oracle.  All rights reserved.

    Connected to:

    Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

    Data Mining and Real Application Testing options

    SQL> select tablespace_name, status from dba_tablespaces where tablespace_name = 'BACKUPTEST';

    TABLESPACE_NAME                STATUS
    ------------------------------ ---------
    BACKUPTEST                     ONLINE

    SQL> alter session set nls_date_format = 'DD-MM-YYYY HH24:MI';

    Session altered.

    SQL> select creation_time from v$datafile where file# = 78;

    CREATION_TIME

    ----------------

    19/12/2013-12:16

    SQL> select creation_time, last_time from v$datafile where file# = 78;

    CREATION_TIME LAST_TIME

    ---------------- ----------------

    19/12/2013-12:16

    SQL>

    19/12/2013 12:16 is the time when the tablespace was created with this data file, not the time it was restored from the RMAN backup.

    Check the alert.log. There will be messages about the RESTORE and RECOVER actions.

    Hemant K Collette

  • Reduce the size of the SYSTEM tablespace or add a data file?

    Hello

    I have read the MOS notes on the AUD$ table etc. and on purging (truncating) it. But what should be done in this situation? Please take a look.

    # Oracle Database 11g Release 11.2.0.2.0 - 64bit Production
    ## size in MB
    TS Name              Total_size   Free_space   %age_free   %age_used
    
    SYSTEM                     2000      409.875          20          80
    SYS@AP AS SYSDBA> select owner, segment_name, segment_type, bytes/1024/1024 "MB" from dba_segments
      2  where tablespace_name = 'SYSTEM' AND rownum <=20 AND bytes/1024/1024 > 1 order by bytes desc;
    
    OWNER                      SEGMENT_NAME                                              SEGMENT_TYPE              MB
    ------------------------------ --------------------------------------------------------------------------------- ------------------ ----------
    SYS                      OBJ$                                                   TABLE                   25
    SYS                      I_OBJ2                                                   INDEX                   22
    SYS                      I_OBJ5                                                   INDEX                   22
    SYS                      DEPENDENCY$                                              TABLE                   15
    SYS                      C_OBJ#                                                   CLUSTER              14
    SYS                      I_OBJ1                                                   INDEX                   11
    SYS                      I_OBJ4                                                   INDEX                   11
    SYS                      I_SYN2                                                   INDEX                   10
    SYS                      SYN$                                                   TABLE                    8
    # System Tablespace has a data file:
    /Oracle/oradata/AP/datafile/o1_mf_system_6l9549kc_.dbf

    What are my options here, please? Can I purge/truncate the segments above, or should I add a data file?

    Thank you

    Regards

    ALTER database... datafile autoextend on next maxsize 64M

    So yes, you can change this. Easily!

    --------
    Sybrand Bakker
    Senior Oracle DBA

  • Calculating the free space in a data file

    Hello

    I use Oracle Database 11g (11.1.0.6). I need to calculate the free space available in a particular data file for inserting new records. I looked in the dba_data_files view for this information. I use the query below to get the maximum size, in GB, to which the data file can grow:

    select maxbytes/1024/1024/1024 from dba_data_files where tablespace_name = 'XXXX';

    How do I get the space available for inserting new records in a particular data file, or in a group of data files belonging to a tablespace?


    Kind regards

    007

    007 wrote:
    Hi Helios - Gunes EROL,

    The query select sum(bytes)/1024/1024/1024 "GB" from dba_data_files where tablespace_name = 'xx' gives the total size of all data files in a tablespace. But I need the free space available in those data files. How do I get it?

    Kind regards

    007

    Use

    select sum(bytes/1024/1024/1024) GB from dba_free_space where tablespace_name='tablespace name'
    

    To find a schema's default tablespace:

    select default_tablespace from dba_users where username='schema name';
    

    To find the tablespace of a particular table:

    select TABLESPACE_NAME from dba_tables where table_name='table name';
    

    Ultimately you will be inserting into a tablespace; it matters little how many data files you have and which of them have free space available.
    So you only need to worry when the tablespace itself is running out of space.

  • Check the logs for Oracle data files deleted by the root user

    Can anybody help me find a log that traces the data files deleted by the root user on the HP-UX server (the syslog has already been modified by the root user)?
    So, is there any way, at the database level or the server level, to track which data files were deleted?

    Hi,

    If you don't have any OS-level auditing, it is difficult to answer your question. For example, we have auditing at both the OS level (where the root password is also controlled) and the DB level.

    PS: @Sybrand thanks, I missed that part.

    Regards,
    HELIOS

  • The maximum size of a data file

    What is the maximum size that we can give to a data file?

    user8850066 wrote:
    What is the maximum size that we can give to a smallfile tablespace?

    Click on the following link:
    Re: Size of data files Max

  • Table high water marks and data file high water marks

    Hi all

    I just have a small question regarding table high water marks in correlation with the HWM of a data file. I have a data file that I would like to resize. At this point I can shrink the 100 GB data file down to 70 GB, which is the HWM of the data file. However, I know that I can probably shrink some tables in the tablespace, and assuming those tables have extents in this data file, will running 'alter table ... shrink space' also lower the HWM of the data file? Or is there no correlation between the two HWMs?

    Thank you.

    Note that you can use alter table move and alter index rebuild to move the used extents toward the logical beginning of the data files. How well Oracle repacks the extents depends on the extent management of the tablespace and on the size and order of the moves/rebuilds you run. The table shrink option allocates no new extents, so it will not move objects toward the front of the file.

    But then again, there is very little point in trying to pack every extent to the front of the file if that just causes Oracle to immediately extend the file to satisfy new allocation requests. Existing free extents below the logical HWM of the file will be reused where possible with locally managed tablespaces, so packing to 100% is not necessary.

    If the tablespace is dictionary-managed, this reuse is more variable, but based on your previous posts there is no need to worry about that situation.

    HTH - Mark D Powell.

  • Problem creating a WiFi hotspot from my USB data card connection!

    Hello, I have an HP Pavilion g6-2005ax machine and have been using Windows 7 Home Basic SP1 since I bought it. I have a 3G data card through which I use the internet on my laptop. Recently I got an Android Mi3 phone and wanted to use my 3G data card's internet on the phone via my laptop, by turning the laptop into a WiFi hotspot. I tried Connectify, Virtual WiFi Router and mHotspot; all of them let me create a WiFi hotspot successfully, and my phone can identify it and even connect to the WiFi network, but I cannot surf or download on the phone in any app or browser. The phone says it is connected to the WiFi network and receives data at around 0.15, 0.58, 0.98 KB. My data card works fine on the PC at about 1 MB/s with no problems, so what is going wrong even after the connection is established? As I said, I tried different PC applications, and in case my phone was the problem I also tried another phone, a Samsung Galaxy Star Pro; it also connects to the WiFi network but has the same problem, i.e. no surfing. So the problem is not at the phone's end. What am I doing wrong here?

    Hello IRON-MAN.

    Thanks for posting on the HP Forums!

    I understand the USB data card hotspot that you create from your laptop is not working properly. I'll do what I can to help you out! Please follow this guide to make sure that you follow the correct procedure: How to create a WiFi HotSpot with your Internet connection / USB Data card...

    Please let me know your results. If you're still having problems, please provide some screen shots so I have something to look at. Thank you and have a great day!

    Mario

  • Issue in the creation of a load rules file

    Hello

    I am creating a simple load rules file using the parent/child method on Sample.Basic.

    I am doing it for the Product dimension.
    I loaded the file, but when I go into Dimension Build Settings -> Dimension Build Settings and set the values below:
    Dimension: Product
    Build method: Use parent/child references

    the setting below automatically changes:
    Dimension: Year

    The rest of the values are saved as I set them, including the field properties. I have no idea why it changes to 'Year'.

    When I click 'Validate', it throws this error/warning:
    "There is an unknown member (or members) in the field name."

    Any help is much appreciated; let me know if you need any further details from my side.

    Thank you
    UNI

    Read SER60 on how to create rules files. It's a good starting point.

    Have you changed the dimension build settings in the rule? (I think that is where your problem is.)

    Change the rule's setting from data load to dimension build.

    Regards

    Celvin

    http://www.orahyplabs.com
