Importing a table into a different schema using Data Pump
Hi all, I am trying to export a table from the MIPS_MDM schema and import it into the MIPS_QUIRINO schema in a different database. I get the error below.
expdp MIPS_MDM/MIPS_MDM@ext01mdm tables=QUARANTINE directory=DP_EXP_DIR dumpfile=quarantine.dmp logfile=qunat.log
To import
impdp MIPS_QUIRINO/MIPS_QUIRINO@mps01dev tables=QUARANTINE directory=DP_EXP_DIR dumpfile=quarantine.dmp logfile=impd.log
Can someone tell me the exact syntax to import into a different schema?
Thank you.
http://download.oracle.com/docs/cd/E11882_01/server.112/e16536/dp_import.htm#BEHFIEIH
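The REMAP_SCHEMA parameter described in that documentation is what handles this; a sketch reusing the names from the question above (untested here, adjust as needed):

```
impdp MIPS_QUIRINO/MIPS_QUIRINO@mps01dev tables=MIPS_MDM.QUARANTINE \
    remap_schema=MIPS_MDM:MIPS_QUIRINO \
    directory=DP_EXP_DIR dumpfile=quarantine.dmp logfile=impd.log
```

REMAP_SCHEMA loads the objects dumped from the source schema into the target schema named after the colon.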
Tags: Database
Similar Questions
-
Moving all materialized views and materialized view logs at the schema level using Data Pump
Hi Experts,
Please help me with how I can exp/imp only the materialized views and MV logs (there are some MVs) from the full schema to another database. I want to exclude everything else.
Regards,
-Samar-

Use DBMS_METADATA. Create the following SQL script:
SET FEEDBACK OFF
SET SERVEROUTPUT ON FORMAT WORD_WRAPPED
SET TERMOUT OFF
SPOOL C:\TEMP\MVIEW.SQL
DECLARE
    CURSOR V_MLOG_CUR IS
      SELECT DBMS_METADATA.GET_DDL('MATERIALIZED_VIEW_LOG',LOG_TABLE) DDL
        FROM USER_MVIEW_LOGS;
    CURSOR V_MVIEW_CUR IS
      SELECT DBMS_METADATA.GET_DDL('MATERIALIZED_VIEW',MVIEW_NAME) DDL
        FROM USER_MVIEWS;
BEGIN
    DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',TRUE);
    FOR V_REC IN V_MLOG_CUR LOOP
      DBMS_OUTPUT.PUT_LINE(V_REC.DDL);
    END LOOP;
    FOR V_REC IN V_MVIEW_CUR LOOP
      DBMS_OUTPUT.PUT_LINE(V_REC.DDL);
    END LOOP;
END;
/
SPOOL OFF
In my case the script is saved as C:\TEMP\MVIEW_GEN.SQL. Now I will create an mview log and an mview in the SCOTT schema and run the script above:
SQL> CREATE MATERIALIZED VIEW LOG ON EMP
  2  /
Materialized view log created.
SQL> CREATE MATERIALIZED VIEW EMP_MV
  2  AS SELECT * FROM EMP
  3  /
Materialized view created.
SQL> @C:\TEMP\MVIEW_GEN
SQL>
Running the C:\TEMP\MVIEW_GEN.SQL script generated a C:\TEMP\MVIEW.SQL file:
CREATE MATERIALIZED VIEW LOG ON "SCOTT"."EMP"
  PCTFREE 10 PCTUSED 30 INITRANS 1 MAXTRANS 255 LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
  FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS"
  WITH PRIMARY KEY
  EXCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW "SCOTT"."EMP_MV" ("EMPNO", "ENAME", "JOB", "MGR",
  "HIREDATE", "SAL", "COMM", "DEPTNO")
  ORGANIZATION HEAP PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
  NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
  FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS"
  BUILD IMMEDIATE
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
  FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS"
  REFRESH FORCE ON DEMAND
  WITH PRIMARY KEY USING DEFAULT LOCAL ROLLBACK SEGMENT
  USING ENFORCED CONSTRAINTS DISABLE QUERY REWRITE
  AS SELECT "EMP"."EMPNO" "EMPNO","EMP"."ENAME" "ENAME","EMP"."JOB" "JOB",
  "EMP"."MGR" "MGR","EMP"."HIREDATE" "HIREDATE","EMP"."SAL" "SAL",
  "EMP"."COMM" "COMM","EMP"."DEPTNO" "DEPTNO" FROM "EMP" "EMP";
Now you can run this on the target database. You may need to adjust the tablespace and storage clauses, or you can add more DBMS_METADATA.SET_TRANSFORM_PARAM calls to C:\TEMP\MVIEW_GEN.SQL to tell DBMS_METADATA not to include the tablespace and/or storage clauses.
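For example, calls like these (a sketch; STORAGE and TABLESPACE are standard DBMS_METADATA transform parameters) could be added next to the existing SQLTERMINATOR call:

```sql
-- Suppress the storage and tablespace clauses in the generated DDL so it
-- can be run in a differently configured target database.
BEGIN
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE', FALSE);
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'TABLESPACE', FALSE);
END;
/
```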
SY.
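As an aside, Data Pump can also filter the export directly with INCLUDE; a hedged sketch (connection, directory, and file names are placeholders) that exports only a schema's MVs and MV logs:

```
expdp scott/tiger schemas=SCOTT \
    include=MATERIALIZED_VIEW include=MATERIALIZED_VIEW_LOG \
    directory=DP_EXP_DIR dumpfile=mv_only.dmp logfile=mv_only.log
```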
-
Migration from 10g to 12c using Data Pump
Hi, while I have used Data Pump at the schema level before, I'm relatively new to full database imports.
We are attempting a full database migration from 10.2.0.4 to 12c using the Data Pump full-database method over a database link.
The DBA has advised avoiding moving SYSAUX and SYSTEM objects, but during the initial documentation review it appeared that these objects are not exported when TRANSPORTABLE=NEVER (the default) is in effect. Can anyone confirm this? The import/export log does refer to objects I thought would not be included:
...
23-FEB-15 19:41:11.684: Estimated 3718 TABLE_DATA objects in 77 seconds
23-FEB-15 19:41:12.450: Total estimation using BLOCKS method: 52.93 GB
23-FEB-15 19:41:14.058: Processing object type DATABASE_EXPORT/TABLESPACE
23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE: 'UNDOTBS1' already exists
23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE: 'SYSAUX' already exists
23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE: 'TEMP' already exists
23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE: 'USERS' already exists
23-FEB-15 20:10:33.200: Completed 96 TABLESPACE objects in 1759 seconds
23-FEB-15 20:10:33.208: Processing object type DATABASE_EXPORT/PROFILE
23-FEB-15 20:10:33.445: Completed 7 PROFILE objects in 1 seconds
23-FEB-15 20:10:33.453: Processing object type DATABASE_EXPORT/SYS_USER/USER
23-FEB-15 20:10:33.842: Completed 1 USER objects in 0 seconds
23-FEB-15 20:10:33.852: Processing object type DATABASE_EXPORT/SCHEMA/USER
23-FEB-15 20:10:52.368: ORA-31684: Object type USER: 'OUTLN' already exists
23-FEB-15 20:10:52.368: ORA-31684: Object type USER: 'ANONYMOUS' already exists
23-FEB-15 20:10:52.368: ORA-31684: Object type USER: 'OLAPSYS' already exists
23-FEB-15 20:10:52.368: ORA-31684: Object type USER: 'MDDATA' already exists
23-FEB-15 20:10:52.368: ORA-31684: Object type USER: 'SCOTT' already exists
23-FEB-15 20:10:52.368: ORA-31684: Object type USER: 'LLTEST' already exists
23-FEB-15 20:10:52.372: Completed 1140 USER objects in 19 seconds
23-FEB-15 20:10:52.375: Processing object type DATABASE_EXPORT/ROLE
23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE: 'SELECT_CATALOG_ROLE' already exists
23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE: 'EXECUTE_CATALOG_ROLE' already exists
23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE: 'DELETE_CATALOG_ROLE' already exists
23-FEB-15 20:10:55.256: ORA-31684: Object type ROLE: 'RECOVERY_CATALOG_OWNER' already exists
...
Appreciate any insight.
The SYS, CTXSYS, MDSYS and ORDSYS schemas are not exported using exp/expdp.
Doc ID: Note 228482.1
I guess you have already installed the 12c software and created a database, it seems; so when you imported, you got these "already exists" errors.
Every time the database is created and the software installed, SYSTEM, SYS and SYSAUX will be created by default.
-
Extracting data from a table using a date condition
Hello
I have a table structure and data as below.
create table production
(
IPC VARCHAR2(200),
PRODUCTIONDATE VARCHAR2(200),
QUANTITY VARCHAR2(2000),
PRODUCTIONCODE VARCHAR2(2000),
MOULDQUANTITY VARCHAR2(2000));
Insert into production
values ('1111', '20121119', '1023', 'AAB77', '0002');
Insert into production
values ('1111', '20121122', '1023', 'AAB77', '0002');
Insert into production
values ('1111', '20121126', '1023', 'AAB77', '0002');
Insert into production
values ('1111', '20121127', '1023', 'AAB77', '0002');
Insert into production
values ('1111', '20121128', '1023', 'AAB77', '0002');
Insert into production
values ('1111', '20121201', '1023', 'AAB77', '0002');
Insert into production
values ('1111', '20121203', '1023', 'AAB77', '0002');
Insert into production
values ('1111', '20121203', '1023', 'AAB77', '0002');
Insert into production
values ('1111', '20130103', '1023', 'AAB77', '0002');
Insert into production
values ('1111', '20130104', '1023', 'AAB77', '0002');
Insert into production
values ('1111', '20130105', '1023', 'AAB77', '0002');
Now, here I want to extract the data with the condition
PRODUCTIONDATE >= Monday of the current week
so I would skip only the first two rows and get all the remaining rows.
I tried to use the condition below, but it does not return the rows with 2013 values.
TO_NUMBER(TO_CHAR(TO_DATE(PRODUCTIONDATE, 'yyyymmdd'), 'IW')) >= TO_NUMBER(TO_CHAR(SYSDATE, 'IW'))
Any help would be appreciated.
Thank you
Mahesh

Hello,
HM wrote:
By the way: it is generally a good idea to store date values in DATE columns. One of the many reasons why storing date information in VARCHAR2 columns (especially VARCHAR2(200)) is a bad idea is that invalid data can get in there, causing errors. Avoid having to convert columns like that, if possible:
SELECT * FROM production WHERE productiondate >= TO_CHAR(TRUNC(SYSDATE, 'IW'), 'YYYYMMDD');
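The corrected predicate works because zero-padded YYYYMMDD strings sort in the same order as the dates they encode, whereas comparing only the ISO week number ('IW') loses the year. A quick illustration in Python (not from the original thread):

```python
from datetime import date

# Zero-padded YYYYMMDD strings compare in the same order as the dates
# they encode, so the plain string comparison in the query above is safe.
d1, d2 = date(2012, 12, 3), date(2013, 1, 3)
assert (d1.strftime("%Y%m%d") < d2.strftime("%Y%m%d")) == (d1 < d2)

# Comparing only ISO week numbers, as the original attempt did, fails
# across the year boundary: 2012-12-03 is in ISO week 49 while
# 2013-01-03 is in ISO week 1, so the week-number filter drops 2013 rows.
assert d1.isocalendar()[1] == 49
assert d2.isocalendar()[1] == 1
```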
-
Using the Data Pump FLASHBACK_SCN parameter
Hey,
I am trying to export a schema, as below:
expdp \"/ as sysdba\" DIRECTORY=data_pump_dir1 DUMPFILE=xxx.dmp LOGFILE=data_pump_dir1:exp_xxxx.log SCHEMAS=xxx FLASHBACK_SCN=xxxxx
So first, I need to get the current SCN of the database.
SQL> select flashback_on, current_scn from v$database;

FLASHBACK_ON       CURRENT_SCN
------------------ -----------
YES                 7.3776E+10

SQL> select to_char(7.3776E+10, '9999999999999999999') from dual;

TO_CHAR(7.3776E+10,'
--------------------
         73776000000
or
SQL> select to_char(DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER, '9999999999999999999') from dual;

TO_CHAR(DBMS_FLASHBA
--------------------
         73775942383
I assume they are both the current SCN of the database... so I do not understand why these two numbers are different (73776000000 and 73775942383)?
Which number should I use to export a current dump of the database?
Thank you very much!!!
Edited by: 995137 on May 30, 2013 08:25

Hello,
You can use either one; the difference is there because of the time elapsed between the two queries. You can check with:
column scn format 9999999999999;
select current_scn scn from v$database
union
select dbms_flashback.get_system_change_number scn from dual;
HTH
-
Inserting data into several related tables using the database adapter
Hello world
I'm working on a BPM application using Oracle BPM 11.1.1.5.0 and JDeveloper 11.1.1.5.0.
In my database I have two tables, Loan and Guarantee, which are related by a field named employeeID (PK in Loan and FK in Guarantee).
Each loan row can have several guarantee rows.
At this point, I'm building an entry form for the user to insert data into both tables.
I did this successfully before with a single table that has no relations.
The way I'm doing it here: after successfully creating the database adapter, a LoanCollection type is created in the types module, which can be used to create business objects and data objects.
The problem is that when I create a process data object of type LoanCollection and then auto-generate a UI based on it, only the fields of the primary table (the Loan table) appear in the form.
On the other hand, if I create a business object based on the LoanSchema, the form for both tables is created automatically (the loan as a form, the guarantees in a table), but then when I try to access it in the service task that calls the database adapter, I have no access to it.
In fact, the only type that can be used in the service task is the process data object based on LoanCollection.
To summarize, I have to use the business object type for my UI to include all the fields of both tables, but I have to use the collection-based process data object in the service task's transformation dialog box.
And I can't find a way to map to another.
Can someone help me with this please?
Thank you very much

Try following these steps:
1. Create a new module in the Business Catalog section of your BPM project.
2. in this new module create 3 Business Objects - (LoanBusinessObject, GuaranteeBusinessObject and GuaranteeArrayBusinessObject)
3. Add the appropriate attributes to the LoanBusinessObject and the GuaranteeBusinessObject so that they mimic your database tables; then to the GuaranteeArrayBusinessObject add an attribute that is an array of type GuaranteeBusinessObject.
4. Now create two process data objects: loanProcessObject of type LoanBusinessObject and guaranteesProcessObject of type GuaranteeArrayBusinessObject.
5. As inputs to your human task, add the loanProcessObject and guaranteesProcessObject; these should now be available in your data controls and can be used to auto-generate the form.
6. In your db adapter you'll then use an XSL transformation with a for-each so that it writes the data to the loan table and all the guarantee line items to the guarantee table.
-
Can I use data dictionary tables in an RLS policy?
Hello guys, I am using the row level security package to limit certain rows to some users.
I created several roles; I want to enable only certain roles to see all the columns, but for the other roles I don't want them to see all the rows. To do this I used the SESSION_ROLES data dictionary view, however it did not work.
What should I do to disallow rows based on the user's roles?
Can I use the data dictionary tables in RLS?
Thank you very much.

Polat says:
What should I do to disallow rows based on the user's roles?
Can I use data dictionary tables in RLS?

Sure:
SQL> CREATE OR REPLACE
  2  FUNCTION no_sal_access(
  3    p_owner IN VARCHAR2,
  4    p_name IN VARCHAR2
  5  )
  6  RETURN VARCHAR2 AS
  7  BEGIN
  8    RETURN '''NO_SAL_ACCESS'' NOT IN (SELECT * FROM SESSION_ROLES)';
  9  END;
 10  /
Function created.

SQL> BEGIN
  2    DBMS_RLS.ADD_POLICY (
  3      object_schema => 'scott',
  4      object_name => 'emp',
  5      policy_name => 'no_sal_access',
  6      function_schema => 'scott',
  7      policy_function => 'no_sal_access',
  8      policy_type => DBMS_RLS.STATIC,
  9      sec_relevant_cols => 'sal',
 10      sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
 11  END;
 12  /
PL/SQL procedure successfully completed.

SQL> GRANT EXECUTE ON no_sal_access TO PUBLIC
  2  /
Grant succeeded.
SQL> CREATE ROLE NO_SAL_ACCESS
  2  /
Role created.
SQL> GRANT SELECT ON EMP TO U1
  2  /
Grant succeeded.
SQL> CONNECT u1@orcl/u1
Connected.
SQL> select ename,sal FROM scott.emp
  2  /

ENAME             SAL
---------- ----------
SMITH             800
ALLEN            1600
WARD             1250
JONES            2975
MARTIN           1250
BLAKE            2850
CLARK            2450
SCOTT            3000
KING             5000
TURNER           1500
ADAMS            1100

ENAME             SAL
---------- ----------
JAMES             950
FORD             3000
MILLER           1300

14 rows selected.

SQL> connect scott@orcl
Enter password: *****
Connected.
SQL> GRANT NO_SAL_ACCESS TO U1
  2  /
Grant succeeded.
SQL> connect u1@orcl/u1
Connected.
SQL> select ename,sal FROM scott.emp
  2  /

ENAME             SAL
---------- ----------
SMITH
ALLEN
WARD
JONES
MARTIN
BLAKE
CLARK
SCOTT
KING
TURNER
ADAMS

ENAME             SAL
---------- ----------
JAMES
FORD
MILLER

14 rows selected.

SQL>
SY.
-
Following the progress of a network Data Pump import?
I'm on Oracle 10.2.0.4 (SunOS) and running a network Data Pump import of a list of tables in a schema.
I see that the following views are available to track Data Pump jobs:
DBA_DATAPUMP_JOBS - lists the running Data Pump jobs and their status
DBA_DATAPUMP_SESSIONS - lists the user sessions attached to each Data Pump job (can be joined to V$SESSION)
DBA_RESUMABLE - shows the work being imported and its status
V$SESSION_LONGOPS - shows the total size of imports and the elapsed time for Data Pump jobs, if they run long enough
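For example, these views can be joined to tie a running job to its sessions and to whatever V$SESSION_LONGOPS currently reports for them (a sketch, not from the original post; column names as documented for 10.2):

```sql
-- Map each Data Pump job to its attached sessions and any long operation.
SELECT dj.job_name, dj.state, s.sid, s.serial#, sl.message
FROM   dba_datapump_jobs dj
JOIN   dba_datapump_sessions ds
       ON ds.owner_name = dj.owner_name AND ds.job_name = dj.job_name
JOIN   v$session s ON s.saddr = ds.saddr
LEFT JOIN v$session_longops sl
       ON sl.sid = s.sid AND sl.serial# = s.serial#;
```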
What other options are available for monitoring network imports?
Also, is it possible to see which table is currently being processed during a multi-table network import?

That would have helped. :^)
When you run a job, if you do not specify a job name, one is generated for you. In your case I don't see a job name specified, so it seems one was generated. The generated names look like:
SYS_IMPORT_FULL_01 if doing a full import
SYS_IMPORT_SCHEMA_01 if doing a schema import
SYS_IMPORT_TABLE_01 if doing a table import
SYS_IMPORT_TRANSPORTABLE_01 if using transportable tablespaces
The 01 is for the first job of that kind. If a second job starts while this job is running, it will be 02. The number can also move up if a job fails and is not cleaned up. In your case, you are doing a table import, so the default job name would be something like SYS_IMPORT_TABLE_01, and let's say you ran it as the SYSTEM schema.
In this case, you can run this command:
impdp system/password attach=system.sys_import_table_01
This will bring you to the Data Pump prompt, where you can type status or status=10, etc.
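For reference, a few of the interactive-mode commands available at that prompt (a sketch; see the impdp documentation for the full list):

```
Import> status            (one-time status report for the job)
Import> status=10         (refresh the status every 10 seconds)
Import> continue_client   (return to logging mode)
Import> exit_client       (detach and leave the job running)
```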
Dean
-
Getting a 500 error trying to create a table using the REST API
Hello
I have been trying to create a table using the REST API for Business Intelligence Cloud, but I keep getting a 500 Internal Server Error.
Here are the details that I use to create a table.
and the JSON I use to create the schema is:
[
{"nullable": [false], "defaultValue": [null], "dataType": ["VARCHAR"], "precision": [0], "length": [18], "columnName": ["ROWID"]},
{"nullable": [true], "defaultValue": [null], "dataType": ["VARCHAR"], "precision": [0], "length": [18], "columnName": ["RELATIONID"]},
{"nullable": [true], "defaultValue": [null], "dataType": ["VARCHAR"], "precision": [0], "length": [18], "columnName": ["ID"]},
{"nullable": [true], "defaultValue": [null], "dataType": ["TIMESTAMP"], "precision": [0], "length": [0], "columnName": ["RESPONDEDDATE"]},
{"nullable": [true], "defaultValue": [null], "dataType": ["VARCHAR"], "precision": [0], "length": [255], "columnName": ["RESPONSE"]},
{"nullable": [false], "defaultValue": [null], "dataType": ["TIMESTAMP"], "precision": [0], "length": [0], "columnName": ["SYS_CREATEDDATE"]},
{"nullable": [false], "defaultValue": [null], "dataType": ["VARCHAR"], "precision": [0], "length": [18], "columnName": ["SYS_CREATEDBYID"]},
{"nullable": [false], "defaultValue": [null], "dataType": ["TIMESTAMP"], "precision": [0], "length": [0], "columnName": ["SYS_LASTMODIFIEDDATE"]},
{"nullable": [false], "defaultValue": [null], "dataType": ["VARCHAR"], "precision": [0], "length": [18], "columnName": ["SYS_LASTMODIFIEDBYID"]},
{"nullable": [false], "defaultValue": [null], "dataType": ["TIMESTAMP"], "precision": [0], "length": [0], "columnName": ["SYS_SYSTEMMODSTAMP"]},
{"nullable": [false], "defaultValue": [null], "dataType": ["VARCHAR"], "precision": [0], "length": [10], "columnName": ["SYS_ISDELETED"]},
{"nullable": [true], "defaultValue": [null], "dataType": ["VARCHAR"], "precision": [0], "length": [50], "columnName": ["TYPE"]}
]
I tried this using Postman and code, but I always get the following error response:
Error 500 - Internal Server Error
From RFC 2068 Hypertext Transfer Protocol - HTTP/1.1:
10.5.1 500 Internal Server Error
The server encountered an unexpected condition which prevented it from fulfilling the request.
I am able to GET existing table schemas and DELETE tables, but I'm not able to perform PUT and POST operations. Can someone help me identify the problem, or any fault in my approach?
Thank you
Romaric
I managed to create a table successfully using the API. The only thing I see in your JSON that differs from mine is that you have square brackets around your JSON values where I do not. Here is my curl request and an extract of my JSON file (named createtable.txt in the same directory as my curl executable):
curl -u [email protected]:password -X PUT -H "X-ID-TENANT-NAME: tenantname" -H "Content-Type: application/json" --data-binary @createtable.txt https://businessintell-tenantname.analytics.us2.oraclecloud.com/dataload/v1/tables/TABLE_TO_CREATE -k
[
  {
    "columnName": "ID",
    "dataType": "DECIMAL",
    "length": 20,
    "precision": 0,
    "nullable": false
  },
  {
    "columnName": "NAME",
    "dataType": "VARCHAR",
    "length": 20,
    "precision": 0,
    "nullable": true
  },
  {
    "columnName": "STATUS",
    "dataType": "VARCHAR",
    "length": 20,
    "precision": 0,
    "nullable": true
  },
  {
    "columnName": "CREATED_DATE",
    "dataType": "TIMESTAMP",
    "length": 20,
    "precision": 0,
    "nullable": true
  },
  {
    "columnName": "UPDATED_DATE",
    "dataType": "TIMESTAMP",
    "length": 20,
    "precision": 0,
    "nullable": true
  }
]
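Before sending a payload like the one above, it can help to round-trip it through a JSON parser and sanity-check the column definitions; a small Python sketch (not part of the original answer; the field names follow the working example above):

```python
import json

# Parse the createtable.txt payload and check each column definition
# before PUT/POSTing it to the service; json.loads raises ValueError
# on malformed JSON, catching quoting mistakes early.
payload = """
[
  {"columnName": "ID", "dataType": "DECIMAL",
   "length": 20, "precision": 0, "nullable": false},
  {"columnName": "NAME", "dataType": "VARCHAR",
   "length": 20, "precision": 0, "nullable": true}
]
"""

columns = json.loads(payload)
required = {"columnName", "dataType", "length", "precision", "nullable"}
for col in columns:
    missing = required - col.keys()
    assert not missing, f"{col.get('columnName')}: missing {missing}"
print(f"{len(columns)} column definitions OK")
```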
-
In my development environment, when I start the production server, I get the following error in server.log. I have configured the data source for production. I am using ATG 10.1. Please help me solve this problem. Here are the logs:
22:46:45,765 INFO [ProfileAdapterRepository] SQL Repository startup complete
22:46:45,988 INFO [AdminSqlRepository] SQL Repository startup complete
22:46:45,990 INFO [AdminAccountInitializer] Initializing account database /atg/dynamo/security/AdminAccountManager from /atg/dynamo/security/SimpleXmlUserAuthority
22:46:46,450 ERROR [ProfileAdapterRepository] Table 'dbc_user' in item descriptor: 'user' does not exist in a table space accessible by the data source. DatabaseMetaData.getColumns returned no columns. Catalog=null Schema=null
22:46:46,466 WARN [ProfileAdapterRepository] atg.adapter.gsa.GSARepository->loadColumnInfo: SQLException in Table.loadColumnInfo. Trying again.
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'atgcoredb.dbc_user' doesn't exist
at sun.reflect.NativeConstructorAccessorImpl.newInstance0 (Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:407)
at com.mysql.jdbc.Util.getInstance(Util.java:382)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1052)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3603)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3535)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1989)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2150)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2620)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2570)
at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1476)
at com.mysql.jdbc.DatabaseMetaData$7.forEach(DatabaseMetaData.java:3966)
at com.mysql.jdbc.IterateBlock.doForAll(IterateBlock.java:51)
at com.mysql.jdbc.DatabaseMetaData.getPrimaryKeys(DatabaseMetaData.java:3950)
at atg.adapter.gsa.Table.loadColumnInfo(Table.java:1926)
at atg.adapter.gsa.GSARepository.loadColumnInfos(GSARepository.java:7534)
at atg.adapter.gsa.GSARepository$1.run(GSARepository.java:5431)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
22:46:46,467 WARN [ProfileAdapterRepository] Unknown JDBC type for property: businessAddress, item type: user. Make sure the column names in the database and the template match. The business_addr column was not found in the set of columns returned by the database: {} for this table.
22:46:46,470 ERROR [ProfileAdapterRepository] Table 'dbc_buyer_billing' in item descriptor: 'user' does not exist in a table space accessible by the data source. DatabaseMetaData.getColumns returned no columns. Catalog=null Schema=null
22:46:46,470 WARN [ProfileAdapterRepository] atg.adapter.gsa.GSARepository->loadColumnInfo: SQLException in Table.loadColumnInfo. Trying again.
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'atgcoredb.dbc_buyer_billing' doesn't exist
at sun.reflect.NativeConstructorAccessorImpl.newInstance0 (Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:407)
at com.mysql.jdbc.Util.getInstance(Util.java:382)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1052)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3603)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3535)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1989)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2150)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2620)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2570)
at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1476)
at com.mysql.jdbc.DatabaseMetaData$7.forEach(DatabaseMetaData.java:3966)
at com.mysql.jdbc.IterateBlock.doForAll(IterateBlock.java:51)
at com.mysql.jdbc.DatabaseMetaData.getPrimaryKeys(DatabaseMetaData.java:3950)
at atg.adapter.gsa.Table.loadColumnInfo(Table.java:1926)
at atg.adapter.gsa.GSARepository.loadColumnInfos(GSARepository.java:7534)
at atg.adapter.gsa.GSARepository$1.run(GSARepository.java:5431)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
22:46:46,471 WARN [ProfileAdapterRepository] Unknown JDBC type for property: myBillingAddrs, item type: user. Make sure the column names in the database and the template match. The addr_id column was not found in the set of columns returned by the database: {} for this table.
22:46:46,611 ERROR [ProfileAdapterRepository] Table 'dbc_org_billing' in item descriptor: 'organization' does not exist in a table space accessible by the data source. DatabaseMetaData.getColumns returned no columns. Catalog=null Schema=null
22:46:46,611 WARN [ProfileAdapterRepository] atg.adapter.gsa.GSARepository->loadColumnInfo: SQLException in Table.loadColumnInfo. Trying again.
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'atgcoredb.dbc_org_billing' doesn't exist
at sun.reflect.NativeConstructorAccessorImpl.newInstance0 (Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:407)
at com.mysql.jdbc.Util.getInstance(Util.java:382)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1052)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3603)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3535)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1989)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2150)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2620)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2570)
at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1476)
at com.mysql.jdbc.DatabaseMetaData$7.forEach(DatabaseMetaData.java:3966)
at com.mysql.jdbc.IterateBlock.doForAll(IterateBlock.java:51)
at com.mysql.jdbc.DatabaseMetaData.getPrimaryKeys(DatabaseMetaData.java:3950)
at atg.adapter.gsa.Table.loadColumnInfo(Table.java:1926)
at atg.adapter.gsa.GSARepository.loadColumnInfos(GSARepository.java:7534)
at atg.adapter.gsa.GSARepository$1.run(GSARepository.java:5431)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
22:46:46,612 WARN [ProfileAdapterRepository] Unknown JDBC type for property: myBillingAddrs, item type: organization. Make sure the column names in the database and the template match. The addr_id column was not found in the set of columns returned by the database: {} for this table.
Are you trying to run the B2BCommerce module? If so, you must run the $dynamo_home/.../B2BCommerce/sql/db_components/mysql/b2b_user_ddl.sql DDL, because you are missing the B2BCommerce tables. If not, you will have to reassemble without the B2BCommerce module.
Thank you
Joe
-
ListView loading XML by using a DataSource does not work?
Hello
When I use a DataSource for loading XML, the ListView displays data only if there are at least 2 elements in the XML file.
import bb.cascades 1.0
import bb.data 1.0

NavigationPane {
    id: nav
    Page {
        id: emp
        titleBar: TitleBar {
            visibility: ChromeVisibility.Visible
        }
        onCreationCompleted: {
            dataSource1.load(); // load the xml when the page is created
        }
        actions: [
            ActionItem {
                title: qsTr("Create List")
                ActionBar.placement: ActionBarPlacement.OnBar
                onTriggered: {
                    dialog.open();
                }
            }
        ]
        Container {
            topPadding: 30.0
            leftPadding: 20.0
            rightPadding: 20.0
            ListView {
                id: list1
                dataModel: dataModel
                listItemComponents: [
                    ListItemComponent {
                        StandardListItem {
                            title: {
                                qsTr(ListItemData.name)
                            }
                        }
                    }
                ]
            }
        }
    } // page
    attachedObjects: [
        GroupDataModel {
            id: dataModel
        },
        DataSource {
            id: dataSource1
            source: "models/employeelist.xml"
            query: "/root/employee"
            type: DataSourceType.Xml
            onDataLoaded: {
                dataModel.clear();
                dataModel.insertList(data);
            }
        },
        Dialog {
            id: dialog
            Container {
                background: Color.Gray
                layout: StackLayout {
                }
                verticalAlignment: VerticalAlignment.Center
                horizontalAlignment: HorizontalAlignment.Center
                preferredWidth: 700.0
                leftPadding: 20.0
                rightPadding: 20.0
                topPadding: 20.0
                bottomPadding: 20.0
                Container {
                    background: Color.White
                    horizontalAlignment: HorizontalAlignment.Center
                    preferredWidth: 700.0
                    preferredHeight: 50.0
                    Label {
                        text: "Add Employee List"
                        textStyle.base: SystemDefaults.TextStyles.TitleText
                        textStyle.color: Color.DarkBlue
                        horizontalAlignment: HorizontalAlignment.Center
                        textStyle.fontSizeValue: 4.0
                    }
                }
                Container {
                    topPadding: 30.0
                    layout: StackLayout {
                        orientation: LayoutOrientation.LeftToRight
                    }
                    Label {
                        text: "Employee Name "
                    }
                    TextField {
                        id: nametxt
                    }
                }
                Container {
                    topPadding: 30.0
                    layout: StackLayout {
                        orientation: LayoutOrientation.LeftToRight
                    }
                    Button {
                        text: "OK"
                        onClicked: {
                            var name = nametxt.text;
                            if (nametxt.text == "") {
                                _model.toastinQml("Please enter a name");
                            } else {
                                _model.writeEmployeeName(name); // writing name to the employeelist.xml
                                nametxt.text = "";
                                dialog.close();
                                dataSource1.load(); // loading the xml
                            }
                        }
                        preferredWidth: 300.0
                    }
                    Button {
                        text: "Cancel"
                        onClicked: {
                            dialog.close();
                        }
                        preferredWidth: 300.0
                    }
                }
            }
        }
    ]
} // navigation
When I add a name to the XML for the first time, the list shows nothing. Then, when I add a second name, it displays the list.
Why is it so? Is there any mistake I made?
Help, please!
Thanks in advance
Diakite
It seems there is a reported problem that has been filed in BlackBerry's internal MKS defect tracking system. Until this issue is reviewed by our internal teams, please use the workaround suggested by the reporter of the issue: introduce an "if" statement before inserting data into the DataModel:
if (data.name) { empDataModel.insert(data); } else { empDataModel.insertList(data); }
-
Updating a table using a WITH clause
Hello
I want to update a table using the selected values.
Cases in the sample:
create table empsalary as (
select 1 as empid, 0 as salary from dual union all
select 2, 0 from dual);
The update data are as follows:
with saldata as
(
select 1 as empid, 5000 as salary, 500 as pf from dual
union all
select 2, 10000, 1000 from dual
)
select empid, salary from saldata
I tried the following query but it does not work:
update table empsalary set (empid, salary) =
(
select * from
(
with saldata as
(
select 1 as empid, 5000 as salary, 500 as pf from dual union all
select 2, 10000, 1000 from dual
)
select empid, salary from saldata
) sl
where sl.empid = empsalary.empid
)
I use oracle 10g.
Help, please.
Thanks for posting the create table statement and test data.
The error message would have helped, but it's pretty obvious what the problem is:
update table empsalary
       *
ERROR at line 1:
ORA-00903: invalid table name
Just remove the word "table".
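For readers following along without an Oracle instance, the corrected pattern (a correlated subquery in the SET clause, with no "table" keyword after UPDATE) can be sketched with Python's built-in sqlite3 module. SQLite supports a single-column correlated SET; Oracle additionally allows the multi-column SET (a, b) = (SELECT ...) form used in the thread. Table and column names follow the thread's example; the saldata table stands in for the WITH clause.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Target table with empty salaries, as in the thread's example.
cur.execute("CREATE TABLE empsalary (empid INTEGER, salary INTEGER)")
cur.executemany("INSERT INTO empsalary VALUES (?, ?)", [(1, 0), (2, 0)])

# Source data that the WITH saldata clause would produce.
cur.execute("CREATE TABLE saldata (empid INTEGER, salary INTEGER, pf INTEGER)")
cur.executemany("INSERT INTO saldata VALUES (?, ?, ?)",
                [(1, 5000, 500), (2, 10000, 1000)])

# Correlated UPDATE: note it is "UPDATE empsalary", not "UPDATE table empsalary".
cur.execute("""
    UPDATE empsalary
       SET salary = (SELECT sl.salary
                       FROM saldata sl
                      WHERE sl.empid = empsalary.empid)
""")
conn.commit()

rows = cur.execute("SELECT empid, salary FROM empsalary ORDER BY empid").fetchall()
print(rows)  # [(1, 5000), (2, 10000)]
```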
-
The advantage of using a date dimension
Hi gurus,
I have a fact table with 5 different dates, such as ship date and order date.
My question is:
Case 1: In the fact table I store date keys (surrogate integers) and create several date dimension aliases, one for each date type: a date dimension alias for the ship date, an alias for the order date, etc. (in the reporting model).
Case 2: I simply avoid using the date dimension and just store the dates themselves (not integers) in the fact table.
This way I save 5 joins to 5 different date dimension aliases.
Note: I need these 5 joins in the reporting model that will be used to create reports, not in the ETL model; in the ETL dimension model only one date dimension will exist, but we must create an alias of the date dimension for each date type used in reports.
So, I was wondering: what is the advantage of using the date dimension?
It is more a reporting/design question than an ETL/ODI question ;).
The advantage of the dimension is that you can select or aggregate data by year, week, quarter, month, day of the week...
If you only kept the date, you would need all these complex formulas in logical columns or calculated items (assuming you are using OBIEE). Performance would suffer and the code would be duplicated in many places. Hope that answers the question.
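To make the answer concrete, here is a minimal sketch (using Python's sqlite3; the table and column names are invented for illustration) of a fact table whose order-date and ship-date keys join to two aliases of a single date dimension, so attributes like quarter come precomputed from the dimension instead of being derived by formulas in every report:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One physical date dimension; year/quarter are precomputed attributes.
cur.execute("CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, "
            "cal_date TEXT, year INTEGER, quarter INTEGER)")
cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?, ?)",
                [(20240105, '2024-01-05', 2024, 1),
                 (20240310, '2024-03-10', 2024, 1),
                 (20240702, '2024-07-02', 2024, 3)])

# The fact table stores surrogate date keys, one per date role.
cur.execute("CREATE TABLE fact_orders (order_id INTEGER, "
            "order_date_key INTEGER, ship_date_key INTEGER, amount INTEGER)")
cur.executemany("INSERT INTO fact_orders VALUES (?, ?, ?, ?)",
                [(1, 20240105, 20240310, 100),
                 (2, 20240310, 20240702, 250)])

# The same dimension plays two roles via the aliases od and sd.
rows = cur.execute("""
    SELECT f.order_id, od.quarter AS order_qtr, sd.quarter AS ship_qtr
      FROM fact_orders f
      JOIN dim_date od ON od.date_key = f.order_date_key
      JOIN dim_date sd ON sd.date_key = f.ship_date_key
     ORDER BY f.order_id
""").fetchall()
print(rows)  # [(1, 1, 1), (2, 1, 3)]
```

Each role-playing join reuses the same dimension rows, which is why adding a sixth date type costs only another alias, not another physical table.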
Jerome
-
Differences between using Data Pump and RMAN to back up the database
What are the differences between using Data Pump and RMAN to back up the database? What are the advantages and disadvantages?
Thank you.
Search for database backup in
http://docs.Oracle.com/CD/B28359_01/server.111/b28318/backrec.htm#i1007289
In brief:
RMAN -> physical backup (copies of the physical database files)
Data Pump -> logical backup (logical data such as tables and procedures)
Docs for RMAN:
http://docs.Oracle.com/CD/B28359_01/backup.111/b28270/rcmcncpt.htm#
Data Pump docs:
http://docs.Oracle.com/CD/B19306_01/server.102/b14215/dp_overview.htm
Published by: Sunny kichloo on July 5, 2012 06:55
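As a rough command-line sketch of the distinction (illustrative only: the connect strings, directory object, and file names below are placeholders, and both tools require a configured Oracle environment):

```shell
# Physical backup with RMAN: copies datafiles, control files, archived logs.
rman target / <<'EOF'
BACKUP DATABASE PLUS ARCHIVELOG;
EOF

# Logical backup with Data Pump: extracts object definitions and row data
# into a dump file that can later be imported with impdp.
expdp system/password@orcl FULL=Y directory=DP_EXP_DIR \
      dumpfile=full.dmp logfile=full_exp.log
```

A restore from the RMAN backup recovers the database files themselves; a Data Pump dump is instead re-imported object by object, which is why it is often used for schema moves rather than disaster recovery.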
-
Using EBS data in BI Publisher
Question: Is it possible to use EBS data directly in BI Publisher? If so, can you point me to the documentation?
Here's the scenario:
Imagine that our company uses Oracle E-Business Suite. All our Oracle columns, tables, and schemas are related to EBS. It is the backbone of our company data. Couldn't live without it.
We're also crazy to OBIEE, especially BI Publisher. We have people who are experts in BI Publisher.
The CEO of the company would like to take advantage of the mountains of data available in EBS, as well as the expertise of the people who are trained and skilled in BI Publisher, by using EBS data directly in BI Publisher. The CEO has provided us with a budget and appropriate staffing to perform a one-time installation of any structures or middleware that must be added for this to happen.
The CEO's goal is for data additions and revisions in EBS to be included automatically in BI Publisher reports. Assuming that between Monday and Tuesday no structural changes occurred in any of the EBS schemas, tables, or columns, and all we did was add and/or modify EBS data, then the BI Publisher reports must reflect Monday's data on Monday and, without our having to do anything in the meantime, reflect Tuesday's data simply by running the BI Publisher reports Tuesday morning. No rebuilding XML files every day or anything like that. Just clean and totally transparent use of EBS data as it gets added and updated during the normal course of business.
In general, what steps do we need to carry out - in EBS, in XML Publisher (if any), and in BI Publisher - to use EBS data directly in BI Publisher on a daily basis as described above?
Ideally, I would just go to the Admin page in BI Publisher and add EBS as a new data source, or perhaps use the integration section of the Admin page, as we would with Discoverer or Hyperion Workspace and Shared Services. But I suspect it's not as simple as that.
Can you help clarify?
Thank you!"Ideally, I like just go to the Admin page in BI Publisher and add EBS as a new data source, or perhaps to use the section of the integration of the Admin page, as we would with discoverer or workspace of Hyperion and Shared Services. '" But I know that's not as simple as that. »
I don't know why you don't think it's as simple as that, but it is. Add a new JDBC data source, assign it to BEEP roles, create templates of data/queries against this data source and you're ready to go.
To use the multi-org views, you need to set the org_id in forward initiation of the report.
What version of BEEP are you using? We have the last BEEP 11 g, we use eBS (R12) as the security model, Teradata is our main source of data, but almost all the reports uses the security context for the multi-org eBS to limit the data of Teradata, based on the security profile of the user as defined in eBS.
I hope this helps.
Thank you
Sunder