Exclude the TAO tables from stats gathering
Hello
FSCM 9.1 on PeopleTools 8.52, Windows 2008, Oracle 11g R2 database.
Automatic statistics collection gathers statistics for the temporary tables PS_XXX_TAO. This distorts the cardinality estimates in the explain plans, so I delete the statistics:
DBMS_STATS.delete_table_stats(ownname => 'SYSADM', tabname => 'PS_XXX_TAO');
But this isn't permanent: the statistics are gathered again during the night.
- How can I delete the statistics for these tables just once, with a permanent result?
or
- How can I exclude these two tables from automatic stats collection?
Thank you.
Thank you; this is what I applied:
exec dbms_stats.lock_table_stats('ADAM', 'B')
Is there any PeopleSoft recommendation on this in the documentation?
Kind regards.
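For the record, the approach that makes this permanent is to delete the misleading stats once and then lock them, so the nightly job skips the table. A sketch using the names from the question (PS_XXX_TAO stands in for the real record name):

```sql
BEGIN
   -- remove the stats that distort the TAO table cardinality estimates
   DBMS_STATS.delete_table_stats(ownname => 'SYSADM', tabname => 'PS_XXX_TAO');
   -- lock them so automatic (nightly) gathering leaves the table alone
   DBMS_STATS.lock_table_stats (ownname => 'SYSADM', tabname => 'PS_XXX_TAO');
END;
/
```

Locked statistics are skipped by DBMS_STATS gathering unless it is explicitly called with force => TRUE.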
Tags: Oracle Applications
Similar Questions
-
Exclude an MV from schema stats gathering
Hey,
I'm running daily statistics gathering with the procedure below.
But it runs at the same time as a materialized view refresh, and it fails because of that.
Is it possible to exclude the materialized view from the schema stats procedure below?
BEGIN
DBMS_STATS.gather_schema_stats (ownname => 'SCOTT', estimate_percent => dbms_stats.auto_sample_size, degree => 2);
END;
This runs on Oracle 10.2.1.0 on Linux.
You can lock the statistics on the tables that you do not want any more stats gathered for: dbms_stats.lock_table_stats
http://download.Oracle.com/docs/CD/B19306_01/AppDev.102/b14258/d_stats.htm#i1043993
Then the command above will not recalculate stats on those objects. Nicolas.
-
Stats not recorded in the user stat table with gather_table_stats
When gathering statistics with DBMS_STATS.GATHER_TABLE_STATS and passing the stattab parameter for a partitioned table, the stats are not saved for that partition in the user stat table unless the call is executed twice. Here's the statement I'm running. If I add a new partition and try to gather statistics for it and store them in the user stats table, they are not stored even though the dictionary stats are updated. If I run it a second time, it updates the entries in the user stat table.
begin
   DBMS_STATS.gather_table_stats ('OWNER',
      tabname          => 'TABLE_NAME',
      partname         => 'P20090824',
      estimate_percent => 2,
      method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
      stattab          => 'DICTSTATTAB',
      granularity      => 'ALL',
      degree           => 8,
      cascade          => true
   );
end;
/
I used similar options and it worked every time. The only difference is that I used granularity => 'PARTITION'. Maybe give that a try.
Also try gathering the stats and exporting them to the stat table as separate steps.
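A sketch of that two-step variant: gather into the dictionary first, then copy the stats into the user stat table with DBMS_STATS.export_table_stats. The owner, table, partition and stat-table names are the ones from the thread; adjust to your environment:

```sql
BEGIN
   -- step 1: gather partition stats into the dictionary only (no stattab)
   DBMS_STATS.gather_table_stats(
       ownname     => 'OWNER',
       tabname     => 'TABLE_NAME',
       partname    => 'P20090824',
       granularity => 'ALL',
       cascade     => TRUE);

   -- step 2: copy the dictionary stats for that partition into DICTSTATTAB
   DBMS_STATS.export_table_stats(
       ownname  => 'OWNER',
       tabname  => 'TABLE_NAME',
       partname => 'P20090824',
       stattab  => 'DICTSTATTAB');
END;
/
```

Splitting the steps avoids relying on gather_table_stats writing to the stat table in the same call.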
Export all excluding two tables
Hello!
I am trying to export a schema while excluding 2 tables. I'm working on 10g on RHEL5.
I use
expdp schemas=BARRY exclude=TABLE:"IN ('TBL1','TBL2')" dumpfile=barry.dmp logfile=barry.log
and I get the following errors:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production 64-bit
With the partitioning, Real Application Clusters, OLAP, data mining
and Real Application Testing options
ORA-39001: invalid argument value
ORA-39071: Value for EXCLUDE is badly formed.
ORA-00936: missing expression
Where am I going wrong?
http://download.Oracle.com/docs/CD/B19306_01/server.102/b14215/dp_export.htm#BEHJHGHB
Is this the case?
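The ORA-39071/ORA-00936 pair is usually the shell mangling the quotes around the EXCLUDE value. One workaround, sketched here with the names from the thread, is a parameter file, where no shell escaping is needed:

```text
# barry.par
schemas=BARRY
dumpfile=barry.dmp
logfile=barry.log
exclude=TABLE:"IN ('TBL1','TBL2')"
```

Then run: expdp parfile=barry.par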
-
Calculate hours on a timeline, excluding overlapping time
RDBMS 11.2.0.3
create table work (startdate date, enddate date);
insert into work values (to_date('16-12-2015 11:45:30','dd-mm-yyyy hh24:mi:ss'), to_date('16-12-2015 14:30:00','dd-mm-yyyy hh24:mi:ss'));
insert into work values (to_date('16-12-2015 12:00:30','dd-mm-yyyy hh24:mi:ss'), to_date('16-12-2015 17:30:00','dd-mm-yyyy hh24:mi:ss'));
insert into work values (to_date('16-12-2015 22:45:30','dd-mm-yyyy hh24:mi:ss'), to_date('17-12-2015 01:15:25','dd-mm-yyyy hh24:mi:ss'));
insert into work values (to_date('17-12-2015 13:45:30','dd-mm-yyyy hh24:mi:ss'), to_date('17-12-2015 20:30:00','dd-mm-yyyy hh24:mi:ss'));
insert into work values (to_date('17-12-2015 19:45:30','dd-mm-yyyy hh24:mi:ss'), to_date('18-12-2015 02:30:00','dd-mm-yyyy hh24:mi:ss'));
STARTDATE            ENDDATE              TIME_DIFF
16/12/2015 11:45:30  16/12/2015 14:30:00  02:44:30
16/12/2015 12:00:30  16/12/2015 17:30:00  05:29:30
16/12/2015 22:45:30  17/12/2015 01:15:25  02:29:55
17/12/2015 13:45:30  17/12/2015 20:30:00  06:44:30
17/12/2015 19:45:30  18/12/2015 02:30:00  06:44:30
I want to calculate the number of hours spent on the timeline, excluding overlapping time.
The timeline is the range from the first STARTDATE (16/12/2015 11:45:30) to the last ENDDATE (18/12/2015 02:30). I want to calculate the hours within this interval, excluding the overlapping hours.
For example.
First row:
From 16/12/2015 11:45:30 to 16/12/2015 14:30 I have a diff of 02:44:30.
Second row:
From 16/12/2015 12:00:30 to 16/12/2015 17:30 I have a diff of 05:29:30, but the start (16/12/2015 12:00:30) falls inside the range already counted for the first row, so I have to remove the overlap. The correct range for the second row is therefore 16/12/2015 14:30 to 16/12/2015 17:30, and the correct time is 03:00.
I need the total elapsed time between the first STARTDATE and the last ENDDATE, excluding the overlapping periods.
Any help is welcome.
Hello
Thanks for posting the CREATE TABLE and INSERT statements; it's very useful!
Don't forget to post the exact results you want from these data. For example:
GRP_STARTDATE       GRP_ENDDATE         HOURS
------------------- ------------------- ------
16/12/2015 11:45:30 16/12/2015 17:30:00 5.7417
16/12/2015 22:45:30 17/12/2015 01:15:25 2.4986
17/12/2015 13:45:30 18/12/2015 02:30:00 12.742
If this is what you want, here's a way to get it:
WITH got_new_grp AS
(
    SELECT  startdate, enddate
    ,       CASE
                WHEN startdate > MAX (enddate) OVER ( ORDER BY startdate, ROWID
                                                      ROWS BETWEEN UNBOUNDED PRECEDING
                                                          AND     1 PRECEDING
                                                    )
                THEN 1
            END    AS new_grp
    FROM    work
--  WHERE   ...    -- if necessary
)
,   got_grp AS
(
    SELECT  startdate, enddate
    ,       COUNT (new_grp) OVER (ORDER BY startdate)  AS grp
    FROM    got_new_grp
)
SELECT    MIN (startdate)  AS grp_startdate
,         MAX (enddate)    AS grp_enddate
,         24 * ( MAX (enddate)
               - MIN (startdate)
               )           AS hours
FROM      got_grp
GROUP BY  grp
ORDER BY  grp
;
Fundamentally, this is a GROUP BY problem: we need the difference between MAX(enddate) and MIN(startdate) in each group of overlapping rows.
The tricky part is identifying the groups of rows that overlap (grp in the query above). We can identify the start of a new group (new_grp) by checking whether a row's startdate is greater than the enddate of all previous rows (where 'previous' means in order by startdate). To identify which group each row belongs to, we can count how many groups have already begun.
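The HOURS column is plain DATE arithmetic: subtracting two DATEs yields days, and multiplying by 24 yields hours. A quick sanity check for the first group:

```sql
-- 16/12/2015 11:45:30 to 16/12/2015 17:30:00 is 5 h 44 min 30 s = 5.7417 hours
SELECT 24 * ( TO_DATE('16/12/2015 17:30:00', 'dd/mm/yyyy hh24:mi:ss')
            - TO_DATE('16/12/2015 11:45:30', 'dd/mm/yyyy hh24:mi:ss')
            )  AS hours
FROM   dual;
```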
-
Hello
Oracle version: 11.1.0.7.0 - 64 bit
I was reading the online documentation about joins.
My question is about the order of evaluation of the join conditions in the ON clause versus the conditions that are not join conditions and are placed in the WHERE clause.
Consider the following pseudocode:
SELECT
    t1.col1,
    t2.col1
FROM
    table1 t1 LEFT OUTER JOIN table2 t2
ON
    (condition_expression1)
WHERE
    (condition_expression2)
Is it correct to say that if no column from the join condition (condition_expression1) appears in condition_expression2, then condition_expression2 is evaluated before condition_expression1? In other words, does Oracle always try to filter each table individually on the WHERE clause as much as possible before joining them on the ON conditions?
Thanks in advance,
Hello
dariyoosh wrote:
> I was reading the online documentation about joins. My question is about the order of evaluation of the join conditions in the ON clause versus the conditions that are not join conditions and are placed in the WHERE clause. [...]
> Is it correct to say that if no column from the join condition (condition_expression1) appears in condition_expression2, then condition_expression2 is evaluated before condition_expression1? In other words, does Oracle always try to filter each table individually on the WHERE clause as much as possible before joining them on the ON conditions? ...
The reverse is actually closer to the truth, but we can't really make general statements like that.
SQL is not a procedural language. Looking at SQL code, we can say what the code does, but we cannot say much about how it does it. In other words, SQL is a language that describes the results you get, not the way to get them.
The optimizer will do whatever it thinks is fastest, provided it does not change the results. Where the order in which they are applied does matter (in outer joins or CONNECT BY queries, for example), think of the join as being done first, with the WHERE clause applied to the result of the join.
Here is a query that looks very much like the one you posted:
SELECT    d.deptno
,         e.ename, e.sal
FROM      scott.dept d
LEFT OUTER JOIN scott.emp e ON e.deptno = d.deptno
WHERE     e.sal >= 3000
ORDER BY  d.deptno
;
Output:
DEPTNO ENAME SAL
---------- ---------- ----------
10 KING 5000
20 FORD 3000
20 SCOTT 3000
The scott.dept table contains deptnos 30 and 40; why are they not in the result set? The query behaves as if the outer join were done first (producing 15 rows) and the WHERE clause applied afterwards. All the rows with deptno = 30 had sals lower than 3000, and the single row with deptno = 40 had NULL in the sal column, so those rows were excluded (along with some rows for deptnos 10 and 20), and only the 3 rows above are left.
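As a follow-up sketch: if the intent had been to keep every department and merely restrict which employees are shown, the sal predicate would belong in the ON clause instead of the WHERE clause:

```sql
SELECT    d.deptno
,         e.ename, e.sal
FROM      scott.dept d
LEFT OUTER JOIN scott.emp e  ON  e.deptno = d.deptno
                             AND e.sal   >= 3000
ORDER BY  d.deptno;
```

With the filter inside the ON clause, deptnos 30 and 40 come back too, with NULL ename and sal.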
-
What is the meaning of this statement?
From http://docs.oracle.com/cd/E11882_01/server.112/e16638/optimops.htm#autoId34, there is a sentence that I can't understand:
'If the inner table's access path is independent of the outer table, then the same rows are retrieved for every iteration of the outer loop, degrading performance considerably.'
What is the meaning of this statement? Can you give me an example?
Thanks
Lonion wrote:
> From http://docs.oracle.com/cd/E11882_01/server.112/e16638/optimops.htm#autoId34, there is a sentence that I can't understand: If the inner table's access path is independent of the outer table, then the same rows are retrieved for every iteration of the outer loop, degrading performance considerably.
> What is the meaning of this statement? Can you give me an example?
Can you say: Cartesian join? The quote is from the section explaining nested loops, and note that it gives you a clue:
> See also: "Cartesian Joins"
The sentence BEFORE the one you quoted is what connects your quote to that mention:
> It is important to ensure that the inner table is driven from (dependent on) the outer table.
This statement means that the rows of the inner table should DEPEND ON the outer table. In a Cartesian join the inner table does not depend on the outer table at all:
SELECT D.*, E.* FROM DEPT D, EMP E
There is no WHERE clause, so there is nothing telling Oracle how the tables are related. Oracle will perform a Cartesian join, and if a nested loop is used then, as your quote says, "the same rows are retrieved for every iteration of the outer loop, degrading performance considerably."
For every outer-table row that Oracle visits for the query, it will now visit the inner table. But because there is no WHERE clause, there is no information available to EXCLUDE rows from the inner table: "the same rows are retrieved" (ALL of them) "for every iteration of the outer loop."
Here is the same query using the USE_NL hint to force Oracle to use a nested loop:
SQL> select /*+ use_nl (d e) */ d.*, e.* from dept d, emp e;

56 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 4192419542

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |    56 |  3248 |    10   (0)| 00:00:01 |
|   1 |  NESTED LOOPS      |      |    56 |  3248 |    10   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| DEPT |     4 |    80 |     3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| EMP  |    14 |   532 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
         42  consistent gets
          0  physical reads
          0  redo size
       3897  bytes sent via SQL*Net to client
        452  bytes received via SQL*Net from client
          5  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         56  rows processed

SQL>
-
How to exclude the XML declaration from each row of the result set?
Hello
I have a table with an XMLTYPE column and would like to SELECT a set of rows. How can I exclude the XML declaration from each row of the result set? My query currently looks like this (I run it through Spring JDBC):
SELECT XMLSerialize(CONTENT t1.xmltext) FROM myschema.event t1 WHERE XMLEXISTS('$e/Event' PASSING XMLTEXT AS "e") ORDER BY t1.time DESC
After selecting, my application converts each row to a string and concatenates all the rows into one large string to parse into a DOM model. I get a parser exception (org.xml.sax.SAXParseException: The processing instruction target matching "[xX][mM][lL]" is not allowed) because there are several XML declarations in my large string. Of course, I could manually check each row's string for the XML declaration, but it would be nicer if the DB did not add it in the first place. Is there a way?
Thank you!
- Daniela
Hello
Some options that I can think of:
SELECT XMLSerialize(CONTENT XMLtransform(t1.xmltext, xmltype('
or quite simply,
SELECT XMLSerialize(CONTENT extract(t1.xmltext,'/') ) FROM myschema.event t1 WHERE XMLEXISTS('$e/Event' PASSING XMLTEXT AS "e") ORDER BY t1.time DESC ;
-
Cannot add a partition to an existing table
Hello
I can't add a partition to an existing table that is not partitioned; I get an "invalid datatype" error, yet I can't find any syntax errors.
ALTER TABLE MESSAGEX_XCHANGE
ADD PARTITION BY RANGE (LAST_MODIFY_TIMESTAMP)
(
  PARTITION old_data VALUES LESS THAN (TO_TIMESTAMP('27/02/2014','DD/MM/YYYY')),
  PARTITION new_data VALUES LESS THAN (TO_TIMESTAMP('28/02/2014','DD/MM/YYYY'))
);
Error report:
SQL Error: ORA-00902: invalid datatype
00902. 00000 - "invalid datatype"
* Cause:
* Action:
Thank you
Manon...
You get this error because your ALTER statement has invalid syntax. Besides, you cannot add partitions to a table that was not created as a partitioned table!
You have to physically re-create the table as a partitioned table in order to add partitions to it.
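A minimal sketch of the re-create, assuming the column and partition bounds from the failing statement (the _PART suffix and the MAXVALUE partition are illustrative; DBMS_REDEFINITION is the online alternative):

```sql
-- build a partitioned copy of the data
CREATE TABLE messagex_xchange_part
PARTITION BY RANGE (last_modify_timestamp)
(
   PARTITION old_data VALUES LESS THAN (TIMESTAMP '2014-02-27 00:00:00'),
   PARTITION new_data VALUES LESS THAN (MAXVALUE)
)
AS SELECT * FROM messagex_xchange;

-- after verifying the copy (and re-creating indexes, grants, constraints):
RENAME messagex_xchange TO messagex_xchange_old;
RENAME messagex_xchange_part TO messagex_xchange;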
-
Hello people,
I currently have a working PL/SQL solution for this, but it would be nice to have it in pure SQL. Any help is appreciated and, once again, thanks for your time.
I'm looking to pick up the most recent date, and the corresponding user that updated the record, for a particular student. There are two tables, T1 and T2. The most recent date can be the create_date or modified_date of T1 or T2.
Scripts for creating the table and INSERT statements:
create table T1 (
  code          varchar2(4),
  create_date   date,
  create_userid varchar2(20),
  modified_date date,
  modify_userid varchar2(20));

create table T2 (
  code          varchar2(4),
  visit_id      number,
  visit_date    date,
  create_date   date,
  create_userid varchar2(20),
  modified_date date,
  modify_userid varchar2(20));

insert into T1 values ('1001',to_date('06-FEB-2013 09:12:12','DD-MON-YYYY HH24:MI:SS'),'ROGER',to_date('12-APR-2013 13:01:12','DD-MON-YYYY HH24:MI:SS'),'BRIAN');
insert into T2 values ('1001',1,to_date('10-JAN-2013','DD-MON-YYYY'),to_date('10-JAN-2013 14:12:12','DD-MON-YYYY HH24:MI:SS'),'ROGER',to_date('12-MAR-2013 12:01:06','DD-MON-YYYY HH24:MI:SS'),'AMY');
insert into T2 values ('1001',2,to_date('31-JAN-2013','DD-MON-YYYY'),to_date('12-MAY-2013 16:11:12','DD-MON-YYYY HH24:MI:SS'),'GRACIE',null,null);
insert into T1 values ('1002',to_date('12-JAN-2013 11:12:13','DD-MON-YYYY HH24:MI:SS'),'LYNNELLE',to_date('12-APR-2013 13:01:12','DD-MON-YYYY HH24:MI:SS'),'BRIAN');
insert into T2 values ('1002',1,to_date('10-JAN-2012','DD-MON-YYYY'),to_date('10-JAN-2012 09:12:12','DD-MON-YYYY HH24:MI:SS'),'ROGER',to_date('12-APR-2013 13:04:12','DD-MON-YYYY HH24:MI:SS'),'AMY');
insert into T2 values ('1002',2,to_date('10-JAN-2013','DD-MON-YYYY'),to_date('12-JAN-2013 11:12:13','DD-MON-YYYY HH24:MI:SS'),'JOHN',null,null);
insert into T1 values ('1003',to_date('04-FEB-2014 12:01:01','DD-MON-YYYY HH24:MI:SS'),'LYNNELLE',null,null);
I want to show for the three codes are the following records:
Code  Table  Date                  User ID
1001  T2     12-MAY-2013 16:11:12  GRACIE
1002  T2     12-APR-2013 13:04:12  AMY
1003  T1     04-FEB-2014 12:01:01  LYNNELLE
For student 1001, the most recent date is the create_date of the visit with visit_id = 2. For code 1002, the most recent date comes from the modified_date of visit 1 (it is 3 seconds later than the T1 modified_date). Finally, for student 1003 (who has no records at all in T2), the create_date is the only date and must be picked up.
Thanks in advance.
with t as (
  select code,
         nvl(modified_date, create_date) dt,
         case nvl(modified_date, create_date)
           when modified_date then modify_userid
           else create_userid
         end userid,
         'T1' tbl
    from t1
  union all
  select code,
         nvl(modified_date, create_date) dt,
         case nvl(modified_date, create_date)
           when modified_date then modify_userid
           else create_userid
         end userid,
         'T2' tbl
    from t2
)
select code,
       max(tbl)    keep (dense_rank last order by dt, tbl) tbl,
       max(dt)     dt,
       max(userid) keep (dense_rank last order by dt, tbl) userid
  from t
 group by code
 order by code
/
CODE TBL DT                   USERID
---- --- -------------------- --------------------
1001 T2  12-MAY-2013 16:11:12 GRACIE
1002 T2  12-APR-2013 13:04:12 AMY
1003 T1  04-FEB-2014 12:01:01 LYNNELLE

SQL>
SY.
-
Dynamic action on form fields in a tabular form
Hello guys.
From what I can see, I can't perform dynamic actions on the form fields of a tabular form. What I want is a select list item whose change event changes the values for all the records in the report. For example, if I have 40 records in the tabular form and I change the value of the field called Status from 'Open' to 'Closed', I want to see that change (status going from 'Open' to 'Closed') on all rows of the tabular form. Is this feasible?
Thank you very much, Bernardo.
Hello Bernardo.
There are several ways to accomplish what you want.
- You can create a PL/SQL procedure to handle an update of all the rows according to the value of your column. I once wrote a blog post that can help you with that:
http://vincentdeelen.blogspot.nl/2013/06/custom-multi-row-processing-using.html
- You can create a dynamic action with JavaScript or jQuery to handle the change event.
The first option is more secure because it is managed by the database; the second is simpler and can, for example, set the whole column for all displayed rows without refreshing your tabular form or your entire page. For it to work properly you should, however, have some validation at the database end. I also think it is not possible to set the values of rows that are not displayed, which would again require some PL/SQL handling.
If you need help setting up the dynamic action, please set up an example on apex.oracle.com.
Kind regards
Vincent
-
Insert problem using a SELECT from a table with a function-based index on TRUNC
I came across this problem when trying to insert the results of a select query: the select returns the correct results, but when they are inserted into a table, the results are different. I found a workaround by forcing an order in the select, but surely this is a bug in Oracle; how can the results of a select statement differ from what the insert inserts?
Platform: Windows Server 2008 R2
Oracle 11.2.0.3 Enterprise Edition
(I've not tried to reproduce this on other versions.)
Here are the scripts to create the two tables and the source data:

CREATE TABLE source_data
( ID          NUMBER(2),
  COUNT_DATE  DATE
);

CREATE INDEX in_source_data ON source_data (TRUNC(count_date, 'MM'));

INSERT INTO source_data VALUES (1, TO_DATE('20120101', 'YYYYMMDD'));
INSERT INTO source_data VALUES (1, TO_DATE('20120102', 'YYYYMMDD'));
INSERT INTO source_data VALUES (1, TO_DATE('20120103', 'YYYYMMDD'));
INSERT INTO source_data VALUES (1, TO_DATE('20120201', 'YYYYMMDD'));
INSERT INTO source_data VALUES (1, TO_DATE('20120202', 'YYYYMMDD'));
INSERT INTO source_data VALUES (1, TO_DATE('20120203', 'YYYYMMDD'));
INSERT INTO source_data VALUES (1, TO_DATE('20120301', 'YYYYMMDD'));
INSERT INTO source_data VALUES (1, TO_DATE('20120302', 'YYYYMMDD'));
INSERT INTO source_data VALUES (1, TO_DATE('20120303', 'YYYYMMDD'));

CREATE TABLE result_data
( ID          NUMBER(2),
  COUNT_DATE  DATE
);

Now execute the select statement:

SELECT id, TRUNC(count_date, 'MM') FROM source_data GROUP BY id, TRUNC(count_date, 'MM');

You should get the following:

1  2012/02/01
1  2012/03/01
1  2012/01/01

Now insert into the results table:

INSERT INTO result_data SELECT id, TRUNC(count_date, 'MM') FROM source_data GROUP BY id, TRUNC(count_date, 'MM');

Select from the table, and you get:

1  2012/03/01
1  2012/03/01
1  2012/03/01

The most recent month is repeated for each row. Truncate result_data and insert with the following statement instead, and the results should now be correct:

INSERT INTO result_data SELECT id, TRUNC(count_date, 'MM') FROM source_data GROUP BY id, TRUNC(count_date, 'MM') ORDER BY 1, 2;

If anyone has encountered this problem before, could you please let me know; I don't see what I'm doing wrong, because the select results are correct and should not differ from what is inserted.
Published by: user11285442 on May 13, 2013
Most likely a bug in 11.2.0.3. I can reproduce it on Red Hat Linux and AIX.
You can perform a search on MOS to see if this is a known bug (very likely); if not, then you have a pretty simple test case with which to open an SR.
John
-
I see that this error was posted here before, but I can't seem to find a post with a resolution.
We use Lab Manager 4.0.4 and just upgraded the hosts in our lab. Before upgrading from ESX 4.0 U1 to ESXi 4.1, we undeployed all VMs in all configurations. None of them were suspended; they were all powered off. We upgraded vCenter from 4.0 to 4.1 as well. We moved from a Dell PowerEdge 1950 with two quad-core Intel L5410s to an R610 with two quad-core Intel E5506s.
When I try to power on some of the lab configurations (our VM templates all work fine), I get this error:
- Cannot use host 'lab1' because the host CPU is incompatible with the virtual machine's suspend state.
I discarded the state for the lab configuration (even though it was powered off and undeployed) and still get this message. I would guess the processor architecture of these chips is quite similar, and both are Intel.
If I go into the Lab Manager directory inside the datastore for one of these virtual machines that won't power on and add it to the inventory, it powers on and starts fine on my ESXi 4.1 server. So how do I get Lab Manager to realize it's fine to deploy and start it?
Hey billk,
Although not completely supported, you can fix it by going into Lab Manager's SQL database. Make sure you back up your database before doing anything like this.
Open the "fsdir" table and match the dir_id with the Lab Manager VM id. Once you find the relevant row, change suspend_proctype_id to null (Ctrl-Zero). You can do this while Lab Manager is still running.
The results can be unstable (i.e. Windows may crash if the CPU change was drastic), but at worst you're looking at a hard reset; the VM was certainly not going to be able to return to its previous state anyway.
Also note that while Lab Manager does not understand EVC, if you have it activated your virtual machines still run in EVC mode. If you have a heterogeneous mix of hosts in a Lab Manager cluster, you can see this issue pop up a lot. There is absolutely nothing wrong with resuming the virtual machine, thanks to EVC, but Lab Manager thinks otherwise and prevents it.
-
Excluding chapters from automatic chapter numbering
Hi guys, first-time poster!
I need help.
I've set up each of my chapters with automatic chapter headings (Type > Text Variables > Insert Variable > Chapter Number),
so that part works fine.
The problem I have is (as you can see in my screenshot) that for my book, Chapter 1 begins at the 7th entry in the Book panel. So instead of Chapter 1, I get Chapter 7 as the chapter number. Am I doing this right? I can't find a solution. Must files such as the table of contents and copyright pages not be in the book compilation? I want to export one entire PDF (which I know how to do) rather than having to join PDF files together. I checked the "Document Numbering Options" and under chapter numbering I don't really see an option to exclude the first 4 or 5 entries and make the 7th in the list "Chapter 1". Any ideas?
Thank you very much!
The lower part of that panel is for the document's chapters.
You can see you have automatic chapter numbers configured.
For Chapter 1, you should choose to start chapter numbering at chapter: 1.
Then, for the documents that come before it,
you must configure them so that they also start numbering at Chapter 1, but use a different numbering style such as A, B, C, etc.,
or anything that works for you.
Bottom line: you need to select the first real chapter in the book and start its numbering at Chapter 1.
-
Question about the creation and population of the I$ table under different conditions
Hello
I have a question about the creation and population of the I$ table under different conditions. Under which conditions is the I$ table created? The conditions are given below:
1) source and staging area are on the same server (i.e. the target is on another server)
2) staging area and target are on the same server (i.e. the source is on another server)
3) source, staging area and target are on 3 different servers
4) source, staging area and target are all on the same server
Thank you
I'm not quite clear on your question, but I'll try my best to clear it up.
In all of the above scenarios the I$ table will be created.
If staging is the same as the target (one database, one user), then all temporary tables are created under that user.
If staging is different from the target (one database, two users A and B), then all temporary tables will be created under user A (let's say) and the data will be inserted into the target table that belongs to user B. If staging is different from the target (two databases, two users A1 and A2; not a recommended architecture), then all temporary tables will be created under user A1 (in database A1) and the data will be inserted into the target table of user A2 (in database A2).
If the source, staging and target are all in one database, then no LKM is required; an IKM is sufficient to load the data into the target. For this in particular, you can see an example given by Craig:
http://S3.amazonaws.com/ora/ODI-Simple_SELECT_and_INSERT-interface.swf
Thank you.