Flashback Data Archive commit time: a performance bug?
Hi all. I use Oracle 11g R2 in a 64-bit Windows environment. I wanted to run some tests with Flashback Data Archive, so I created one and added it to a table holding about 1.8 million records. The table is a copy of one of the Oracle sample tables, SH.SALES; I created it from that table and then inserted the same data a second time.
-- not a SH session
Create Table Sales as select * from sh.sales;
insert into sales select * from sh.sales;
Commit;
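The question does not show the archive creation itself; a hypothetical version of that setup (archive name, tablespace, quota and retention here are invented placeholders, not the exact values used in the test) would look something like:

```sql
-- Hypothetical reconstruction of the test setup; names and sizes
-- are placeholders for illustration only.
CREATE FLASHBACK ARCHIVE test_fda
  TABLESPACE users
  QUOTA 10G
  RETENTION 1 YEAR;

-- Attach the copied table to the archive so its history is tracked.
ALTER TABLE sales FLASHBACK ARCHIVE test_fda;
```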
The insert takes a few seconds. The commit, however, sometimes takes more than 20 minutes, and sometimes 0 seconds. If the commit after the insert is quick, I can update the table and then commit again:
update sales set prod_id = prod_id; -- update with the same data
commit;
The update takes a few seconds longer. If the first commit (after the insert) took very little time, the second commit, after the update, takes more than 20 minutes. While that commit is running, my CPU becomes overloaded at 100%. The system Oracle runs on is good enough for test purposes: an i7 with 4 physical cores, 8 GB RAM, an SSD disk, etc.
When I looked at the performance monitoring pages in Enterprise Manager, I saw this SQL in my top SQL list:
insert /*+ append */ into SYS_MFBA_NHIST_74847 select /*+ leading(r)
use_nl(v) PARALLEL(r,DEFAULT) PARALLEL(v,DEFAULT) */ v.ROWID "RID",
v.VERSIONS_STARTSCN "STARTSCN", v.VERSIONS_ENDSCN "ENDSCN",
v.VERSIONS_XID "XID" ,v.VERSIONS_OPERATION "OPERATION", v.PROD_ID
"PROD_ID", v.CUST_ID "CUST_ID", v.TIME_ID "TIME_ID", v.CHANNEL_ID
"CHANNEL_ID", v.PROMO_ID "PROMO_ID", v.QUANTITY_SOLD
"QUANTITY_SOLD", v.AMOUNT_SOLD "AMOUNT_SOLD" from SYS_MFBA_NROW r,
SYS.SALES versions between SCN :1 and MAXVALUE v where v.ROWID =
r.rid
This consumes my resources for more than 20 minutes. All I did was update 1.8 million records (the update itself really takes little time) and commit (which kills my system). What is the reason for this?
Info:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
I see that in Guy Harrison's example the SYS_MFBA_NROW table contains a very large number of rows, and the query is forced into a nested-loop join on this table (as is your query). In your case the second rowsource of the nested loop is a view, and there is no sign of a "pushed" predicate.
If you have a very large number of rows in the table, the resulting code is bound to be slow. To check, I suggest you run the test again (from scratch) with sql_trace enabled, or with statistics_level set to ALL, so that you can get the rowsource execution statistics for the query and check where the time is being spent and where the volume appears. You may have to raise this with Oracle: if this observation is correct, then FBDA is only suitable for OLTP systems, not DSS or DW.
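A minimal sketch of how to capture those statistics in your own session (assuming you can rerun the test interactively; for the FBDA-generated insert itself you would need its sql_id, since it runs from a background process):

```sql
-- Turn on rowsource statistics and extended tracing for this session only.
ALTER SESSION SET statistics_level = ALL;
ALTER SESSION SET sql_trace = TRUE;

-- ... rerun the insert / update / commit test here ...

-- Afterwards, pull the actual rowsource statistics for the last statement.
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```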
Regards
Jonathan Lewis
Tags: Database
Similar Questions
-
Hi all
There is this question on the web:
Q: identify the statement that is true about Flashback Data Archive:
A. You can use multiple tablespaces for an archive, and each archive can have its own retention period.
B. You can have an archive, and for each tablespace which is part of the archive, you can specify a different retention period.
C. You can use multiple tablespaces for an archive, and you can have more than one default archive per retention period.
D. If you specify a default archive, it must exist in one tablespace only.
Everyone says that the correct answer is B. Isn't it supposed to be A?
The retention period is specified at the flashback archive level, not the tablespace level, so B is incorrect.
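The syntax itself makes the point: retention is an attribute of the archive as a whole, while tablespaces are merely added to it. A quick sketch (archive and tablespace names invented for illustration):

```sql
-- Retention belongs to the archive as a whole ...
CREATE FLASHBACK ARCHIVE fda1
  TABLESPACE fda_ts1
  RETENTION 2 YEAR;

-- ... and further tablespaces are added with no retention clause of
-- their own, so a per-tablespace retention period is not possible.
ALTER FLASHBACK ARCHIVE fda1
  ADD TABLESPACE fda_ts2;
```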
-
Hello
I heard about Flashback Data Archives.
Can I get some short examples so I can understand it in a practical way?
Thank you. Please post your comments. I got the answer from your post.
I'm confused about what you want to get from us when you already have the answer, but... :)
I think the doc link above will answer your remaining/future questions about FDA (Flashback Data Archive).
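As a quick practical illustration, a flashback-archived table can be queried back in time like this (a sketch with made-up table names, assuming a default flashback archive already exists):

```sql
-- Start tracking history for a table in the default archive.
ALTER TABLE emp_test FLASHBACK ARCHIVE;

-- ... later, after some updates and commits ...

-- See the rows as they were an hour ago.
SELECT *
  FROM emp_test AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);

-- Or walk through all row versions in that window.
SELECT versions_starttime, versions_operation, e.*
  FROM emp_test
       VERSIONS BETWEEN TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR)
                    AND SYSTIMESTAMP e;
```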
Regards
Girish Sharma -
Fact table with multiple date columns joined to the time dimension
Hello all
First, let me describe my scenario:
I have an orders table with these columns:
order_no (int),
create_date (date),
approve_date (date)
close_Date (date)
----
I have a time dimension with a time hierarchy: d_time.
I want to see, along the time hierarchy, how many orders were created, approved and closed for each time level (year, quarter, month, ...), as below:
Date    | created | approved | closed
————————————————------------------------
2007-Q1 | 50      | 40       | 30
2007-Q2 | 60      | 20       | 10
2007-Q3 | 10      | 14       | 11
2007-Q4 | 67      | 28       | 22
2008-Q1 | 20      | 13       | 8
2008-Q2 | 55      | 25       | 20
2008-Q3 | 75      | 35       | 20
2008-Q4 | 90      | 20       | 2
…
My solution:
Physical layer:
1. I created an f_order alias as the fact table for the order role.
2. I joined f_order to d_time (a time alias) on f_order.created_date = d_time.day_id.
3. I added 2 logical columns with these measure formulas:
is_approved (SUM aggregation): IF approve_date IS NULL THEN 0 ELSE 1
is_closed (SUM aggregation): IF close_date IS NULL THEN 0 ELSE 1
order_no (will be used for the "created" measure, with COUNT aggregation)
When I create the report in Analytics, the generated query uses only created_date, and this isn't what I expected!
What is the best solution?
1. Should I create 3 aliases of the order fact, f_order_created, f_order_approved and f_order_closed, and join each of them to d_time on the corresponding column?
f_order_created.created_date = d_time.day_id
f_order_approved.approve_date = d_time.day_id
f_order_closed.close_date = d_time.day_id
2. Or should I define my measures differently?
Hi anonymous user,
The approach with three fact aliases, which you then use as three separate logical table sources for your logical fact table, is the right one. This way you keep one canonical fact table and let the aliases do the role-playing.
So you won't need the 3 logical facts you ask about above, only 3 LTSs. Physically, you do need the 3 aliases joined to the time dimension with the join specifications you mention, of course.
PS: Jeff wrote about this a few years ago if you want to take a look.
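In plain SQL terms, the three role-playing aliases amount to something like the sketch below (table and column names follow the question; the quarter column and join shape are assumptions for illustration). Aggregating each role separately avoids fan-out between the aliases, which is roughly what the BI server does per logical table source:

```sql
-- One canonical orders table plays three roles against d_time,
-- once per date column; each role is aggregated independently.
WITH created AS (
  SELECT d.quarter_name q, COUNT(*) n
    FROM d_time d JOIN orders o ON o.created_date = d.day_id
   GROUP BY d.quarter_name),
approved AS (
  SELECT d.quarter_name q, COUNT(*) n
    FROM d_time d JOIN orders o ON o.approve_date = d.day_id
   GROUP BY d.quarter_name),
closed AS (
  SELECT d.quarter_name q, COUNT(*) n
    FROM d_time d JOIN orders o ON o.close_date = d.day_id
   GROUP BY d.quarter_name)
SELECT c.q AS quarter, c.n AS created, a.n AS approved, cl.n AS closed
  FROM created c
  LEFT JOIN approved a ON a.q = c.q
  LEFT JOIN closed  cl ON cl.q = c.q;
```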
-
Unlimited retention on the Flashback Data Archive
11.2.0.4
An FDA is created with a retention time in days, months or years.
https://docs.Oracle.com/CD/E11882_01/server.112/e41084/statements_5010.htm#BABIFIBJ
Is there a retention setting that makes retention unlimited/forever? Or do you just set something like 100 years?
Yes
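As far as I know there is no UNLIMITED keyword for the retention clause in this release, so an effectively-forever archive is usually expressed with a very long retention period (names here are illustrative):

```sql
-- An effectively unlimited archive: history is kept for 100 years
-- before FBDA is allowed to purge it.
CREATE FLASHBACK ARCHIVE fda_forever
  TABLESPACE fda_ts
  RETENTION 100 YEAR;
```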
-
I'm basically on a performance team, working on porting Gecko 1.2. We completed the port successfully, but we believe its performance is not as good as Gecko 1.1. We want a tool/app to measure performance for:
(1) application load time
(2) image rendering time
(3) integrated audio
(4) key response time
(5) browser load time
etc.
Hello
Thank you for your interest in Firefox OS. In the Settings app, go to Device Information > More Information > Developer. You will find developer tools for load time, frames per second, etc.
Best regards
Michelle Luna -
Problem activating a flashback archive!
Hello
When I try to create a flashback archive and run this code:
conn system/manager as sysdba
CREATE FLASHBACK ARCHIVE test_archive
TABLESPACE example
QUOTA 1M
RETENTION 1 DAY;
I get:
ERROR on line 1:
ORA-00439: feature not enabled: Flashback Data Archive
and when I try to turn it on for a table by running this code:
ALTER TABLE table_name FLASHBACK ARCHIVE;
I get:
ERROR on line 1:
ORA-00439: feature not enabled: Flashback Data Archive.
M.L
Check this query:
SQL> SELECT * FROM V$OPTION WHERE parameter LIKE 'Flash%';
If it shows FALSE, you don't have the Flashback option. Probably, as was said, you must move from an Oracle Standard Edition license to Oracle Enterprise Edition.
-
Any way to change/replace EXIF data such as the date of the photo?
I am importing into Lightroom several thousand photos I took with my first digital camera in 2000-2004. Everything went very smoothly. But here's my problem:
In my old workflow, I saved my edited photos as TIF files, and whatever program I used to edit them stripped the EXIF data from them. Thus, for example, a photo that was taken on March 4, 2001 has EXIF information showing it was taken on March 10, 2001, the date on which I edited it and made the TIF. All the other EXIF data is stripped as well: the exposure data, lens information, ISO, whether the flash fired, etc. Basically, the TIF file seems to have no EXIF data other than my name and the file size.
I can live without the other info, really, but the changed date/time means that the TIF file does not sort next to the original file in Lightroom, which is what I want. There are not thousands of photos affected, a few hundred at most.
Is it possible, with a plugin or an external program, to copy the EXIF information from the original file and insert it into the TIF file?
To change the date, use the Edit Capture Time option located under the Library > Metadata menu.
-
Install ESXi 4.1 and the datastore on the same server
Hello
I want to install ESXi 4.1 and the datastore on the same server.
My problem is that I can't create partitions to really separate them, and I would like to be able to reinstall ESXi without wiping the datastore.
Is there another way, short of taking a disk out of the RAID just to install ESXi on it?
I also don't want to install on a USB key.
Thank you
AZEL says:
Hello
I want to install ESXi 4.1 and the datastore on the same server.
My problem is that I can't create partitions to really separate them, and I would like to be able to reinstall ESXi without wiping the datastore.
Is there another way, short of taking a disk out of the RAID just to install ESXi on it?
I also don't want to install on a USB key.
Thank you
AZEL,
Can you give us more details about your current environment? What is the size of the datastore and how much space is used? Do you have any additional network-attached storage connected to the host (for backup purposes)?
My assumption from your post is that you have 1 standalone host with ESXi 4.1 already installed, and that you also have a local datastore on the same host, but you want to re-install ESXi 4.1 while keeping the contents of the datastore. Is this correct?
-
CPU and memory performance by date and time range
Hi Expert,
I'm not that good at scripting and am still learning how to write scripts, but for the moment I need help with a script that could generate output like the one below.
Example query:
Retrieve performance logs for vmname1, vmname2, vmname3 from 07:00 to 10:00 on 25/05/2012.
CPU in MHz, and MEM in MB.
VMName  | Date       | Time  | MemMax | MemAvg | MemMin | CPUMax | CPUAvg | CPUMin
vmname1 | 25/05/2012 | 06:00 | 55.49  | 48.30  | 44.55  | 18.34  | 10.56  | 7.43
vmname2 | 25/05/2012 | 07:00 | 52.82  | 47.81  | 44.35  | 28.14  | 9.80   | 4.23
vmname3 | 25/05/2012 | 08:00 | 48.62  | 46.04  | 42.19  | 22.51  | 12.92  | 8.80
vmname4 | 25/05/2012 | 09:00 | 49.11  | 47.66  | 41.45  | 15.82  | 8.45   | 4.36
I really appreciate any advice in this area. Thank you.
You can try something like this
$vms = Get-VM vmname1,vmname2,vmname3
$metrics = "cpu.usagemhz.average","mem.usage.average","cpu.usagemhz.minimum",
           "mem.usage.minimum","cpu.usagemhz.maximum","mem.usage.maximum"
$start  = Get-Date -Hour 7  -Minute 0 -Second 0 -Day 25 -Month 5 -Year 2012
$finish = Get-Date -Hour 10 -Minute 0 -Second 0 -Day 25 -Month 5 -Year 2012

Get-Stat -Entity $vms -Stat $metrics -Start $start -Finish $finish |
  Group-Object -Property EntityId,Timestamp | %{
    New-Object PSObject -Property @{
      VMName = $_.Group[0].Entity.Name
      Date   = $_.Group[0].Timestamp.ToLongDateString()
      Time   = $_.Group[0].Timestamp.ToLongTimeString()
      MemMax = $_.Group | where {$_.MetricId -eq "mem.usage.maximum"}    | Select -ExpandProperty Value
      MemAvg = $_.Group | where {$_.MetricId -eq "mem.usage.average"}    | Select -ExpandProperty Value
      MemMin = $_.Group | where {$_.MetricId -eq "mem.usage.minimum"}    | Select -ExpandProperty Value
      CpuMax = $_.Group | where {$_.MetricId -eq "cpu.usagemhz.maximum"} | Select -ExpandProperty Value
      CpuMin = $_.Group | where {$_.MetricId -eq "cpu.usagemhz.minimum"} | Select -ExpandProperty Value
      CpuAvg = $_.Group | where {$_.MetricId -eq "cpu.usagemhz.average"} | Select -ExpandProperty Value
    }
  }
Note that whether the minimum and maximum measures are available will depend on the statistics level you have set for the chosen time interval.
-
Data download performance
Hello..
My app downloads an xml file, and in this xml file I have 12 image URLs... each image is about 22 KB...
So when I read the xml I create a bitmap from each URL and display it in a table.
The app works very well, but I tested it on wifi and EDGE networks, and the app is very slow to download the images into the table.
My question is, is it possible to speed this up? I do all processing in the same thread; is it possible to load the images and populate the table in another thread? Asynchronously?
Thank you!
You cannot improve the performance of a single Thread. The point of having two of them is simply to use more of the available connection time. If you have a single Thread, there are times when your thread is processing the downloaded data, and during that time the network is not being used. If you have another Thread, then while one Thread processes the data it downloaded, the other Thread can be downloading the next image.
I'm not aware of an example.
Perhaps the easiest thing is to simply run another copy of the Thread, but have it start with a different list of files to download. Or, more generally, have a shared queue and let each Thread pull the next file to download off the queue until the queue is empty.
-
Hello
Oracle version: 12.1.0.1.0 - 64 bit
OS: Fedora Core 17 X86_64
My question is about the preservation of the time fields (hour, minute, second) of DATE-type columns in a table when NLS_DATE_FORMAT changes.
Take the following test case:
SQL> create table tmptab(dateval date);

Table created.

SQL> alter session set nls_date_format = 'yyyy-mm-dd hh24:mi:ss';

Session altered.

SQL> insert into tmptab(dateval) values('2014-01-01 00:00:00');

1 row created.

SQL> insert into tmptab(dateval) values('2014-01-01 10:00:00');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from tmptab;

DATEVAL
-------------------
2014-01-01 00:00:00
2014-01-01 10:00:00

SQL> alter session set nls_date_format = 'yyyy';

Session altered.

SQL> select * from tmptab where dateval > '2014';

no rows selected

SQL>
I don't understand why it returns nothing. The second insert statement in the test case above inserted a row with 10 as the hour value of the DATE column dateval.
Accordingly, when comparing this with the literal '2014' (which, based on the new value of NLS_DATE_FORMAT = 'yyyy', is implicitly converted to DATE), shouldn't the above query return the row 2014-01-01 10:00:00?
I mean, I changed NLS_DATE_FORMAT, but the time fields of the data in the table are preserved, and that's why they should normally be taken into account in the date comparison.
What I'm trying to say is that, for me (please correct me if I'm wrong), no matter what the NLS_DATE_FORMAT setting is, the following test
SQL> select * from tmptab where dateval > '2014';
is the same thing as
SQL> select * from tmptab where dateval > to_date('2014-01-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss');
And because of the row 2014-01-01 10:00:00 in the tmptab table, the following test
2014-01-01 10:00:00 > to_date('2014-01-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
normally evaluates to true (because HOUR = 10 on the left side of the test), and therefore this row should be returned, which is not the case in the test above.
Could you kindly tell me what I have misunderstood?
Thanks in advance,
This is the price of using implicit conversions. Implicit DATE conversion rules are not as straightforward as one might assume. In your case, all you provide in the date format is the year. In that case the implicit date conversion rules assume the current month as the month, 1 as the day, and 00:00:00 as the time.
SQL> alter session set nls_date_format = 'yyyy';

Session altered.

SQL> select to_char(to_date('2014'), 'dd/mm/yyyy hh24:mi:ss') from dual;

TO_CHAR(TO_DATE('20
-------------------
01/08/2014 00:00:00

SQL>
So, when you run:
select * from tmptab where dateval > '2014';
Oracle implicitly converts '2014' using 'YYYY', which, with August as the current month, results in August 1, 2014. That's why your query returns no rows.
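To make the comparison behave predictably regardless of NLS_DATE_FORMAT, spell out the literal and the format mask explicitly. A minimal sketch against the tmptab table from the test case:

```sql
-- Explicit TO_DATE removes the dependence on the session's
-- NLS_DATE_FORMAT, so the 10:00:00 row is matched as expected.
SELECT *
  FROM tmptab
 WHERE dateval > TO_DATE('2014-01-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss');
```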
SY.
-
How AUTOEXTEND affects the performance of a large data load
I am doing some reorganisation on a data warehouse, and I need to move almost 5 TB worth of tables and rebuild their indexes. I am creating a tablespace for each month, using BIGFILE tablespaces and allocating 600 GB to each, which is the approximate size of the tables for each month. Just the space-allocation process takes a long time, so I decided to try a different approach: create the data file with AUTOEXTEND ON NEXT 512M instead, and then run the ALTER TABLE MOVE commands to move the tables. It is an Oracle 11g Release 2 database, and it uses ASM. I was wondering which would be the best approach between these two:
1. Create the tablespace with AUTOEXTEND OFF, size it at 600 GB, and then run the ALTER TABLE MOVE commands. The space would be enough for all the tables.
2. Create the tablespace with AUTOEXTEND ON, allocating no more than 1 GB up front, and then run the ALTER TABLE MOVE commands. The diskgroup has enough space for the expected size of the tablespace.
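For concreteness, the two options can be sketched as below (tablespace and diskgroup names are made up for illustration, not taken from the question):

```sql
-- Option 1: pre-allocate the whole 600 GB file, no autoextend.
CREATE BIGFILE TABLESPACE ts_month_01
  DATAFILE '+DATA' SIZE 600G
  AUTOEXTEND OFF;

-- Option 2: start small and let the file grow in 512 MB steps.
CREATE BIGFILE TABLESPACE ts_month_01
  DATAFILE '+DATA' SIZE 1G
  AUTOEXTEND ON NEXT 512M MAXSIZE UNLIMITED;
```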
With the first approach my database takes 10 minutes to move each partition (there is one for each day of the month). Would that number be affected dramatically if the database must do an AUTOEXTEND every 512 MB?
If you measure performance as the time required to allocate the initial 600 GB data file plus the time to load, and compare this with allocating a small file and loading while letting the data file autoextend, it is unlikely that you will see a noticeable difference. You will get far more variation just from moving the 600 GB than you will lose waiting for the database to extend the file. If there is a difference, allocating the whole file up front will be slightly more efficient.
More likely, though, is that you wouldn't count the time required to allocate the initial 600 GB data file, since that is something that can be done long in advance. If you don't count that time, then allocating the entire file up front will be much more effective.
If you end up needing less than 600 GB, on the other hand, allocating the entire file at once could waste quite a lot of space. If that is a concern, it may be wise to compromise: allocate a 500 GB file initially (assuming that is a reasonable lower bound on the size you actually need) and let the file grow in 1 GB pieces. This is not the most efficient approach and you can still waste up to a GB of space, but maybe it's a reasonable compromise.
Justin
-
My MacBook Pro backs up to the Time Capsule even when I am at work, which means the Time Capsule backup is consuming my data plan. Can anyone suggest a way to have it back up only when my MacBook Pro and the Time Capsule are on the same local wifi?
If the Time Capsule and the MacBook Pro are not on the same network, the MacBook is not backing up to the Time Capsule. What you are probably seeing are local snapshots being written to the MacBook's own disk until the two are reconnected. If you don't want that to happen, turn Time Machine off while on a different network.
Good day.
-
My incoming emails stopped displaying the date; they display the time only.
Also, my sent emails do not show the date; just the time is displayed.
If the e-mail was received or sent today, only the time will be displayed.
E-mail messages that are older than today will show both the date and the time.