Newbie (sorry) data-load question about datafile / extent sizing
Hi guys. Sorry to disturb you - but I did a lot of reading and am still confused.
I was asked to create a new tablespace:
create tablespace xyz datafile '/oradata/corpdata/xyz.dbf' size 2048M extent management local uniform size 1023M;
alter tablespace xyz add datafile '/oradata/corpdata/xyz.dbf' size 2048M;
Despite being worried that I was given no information about the data to load or why the tablespace had to be sized that way, I was told to just 'do it'.
Someone tried to load data - and there was a message in the alert log:
ORA-1652: unable to extend temp segment by 65472 in tablespace xyz
We do not use autoextend on data files, even though the person loading the data would like to (they are new to the environment).
The database is on a nightly cold backup routine - we are between a rock and a hard place - we have no space on the server to run RMAN and only 10 GB left on the tape (Veritas) backup routine, so we control space by not using autoextend.
As far as I know, the above error message means the storage space is not large enough to hold the loaded data - but the person who imports the data told me they had it correctly sized, and that it was something I did when creating the tablespace (although I cut and pasted from their instructions, adapting them to our environment - Windows 2003 SP2, 32-bit).
The person called to say I had messed up their data load and was going to report me to my manager for failing to do my job - and they did, and my line manager said that I failed to create the tablespace correctly.
When this person asked me to create the tablespace, I asked why they thought the extents should be 1023M, and they said it was a large data load that had to be inserted into a single extent.
That sounds good... but I'm confused.
1023M is a lot - it means you have only four extents in the tablespace before it reaches capacity.
It is a GIS data load. I have not participated in the previous GIS data loads, other than monitoring and changing tablespaces to support them - and previous people sized them right, and I've never had any come back. Guess I was a bit lazy - I just did as they asked.
However, previous loads always used 128K as the extent size - never 1023M.
Can I ask: is 1023M normal for large data loads, or am I right to question it? It seems excessive unless you really have just one table and one index of 1023M each.
Thanks for any idea or other research.
Assuming a block size of 8 KB, 65472 blocks would be 511 MB. However, as it is a GIS database, my guess is that the database block size itself has been set to 16K, in which case 65472 blocks is 1023 MB.
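To make the arithmetic concrete, here is a quick sketch (plain Python, nothing Oracle-specific; only the numbers come from the thread) of how the block count in the ORA-1652 message maps to megabytes under the two candidate block sizes:

```python
# ORA-1652 reports the size of the failed extent request in database blocks.
# Converting 65472 blocks to megabytes for 8 KB and 16 KB block sizes:

def extent_request_mb(blocks, block_size_kb):
    """Megabytes represented by an extent request of `blocks` blocks."""
    return blocks * block_size_kb / 1024

print(extent_request_mb(65472, 8))   # 511.5 -> ~511 MB with an 8 KB block
print(extent_request_mb(65472, 16))  # 1023.0 -> exactly the 1023M uniform extent
```

The 16 KB result lining up exactly with the 1023M uniform extent size is what points to a 16K block size.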
How is the data load done? An Oracle export dump? Does it include CREATE INDEX statements?
Export-import does a CREATE TABLE and INSERTs, so you would not get an ORA-1652 on that. You get the ORA-1652 when the index is created.
That is, you will get an ORA-1652 on a CREATE INDEX because the target segment (i.e. the index) for this operation is initially created as a 'temporary' segment, and only when the index build is complete does it switch from 'temporary' to an 'index' segment.
Also, if parallelism is used, each parallel slave would attempt to allocate extents of 1023 MB. Therefore, even if the final index should have been only, say, 512 MB, a CREATE INDEX with a DEGREE of 4 would begin with 4 extents of 1023 MB each - and the requirement would not shrink to less than that!
A uniform extent size of 1023 MB is, in my opinion, very bad. My guess is that they came up with an estimate of the size of the table and thought that the table should fit in 1 extent, and therefore specified 1023 MB in the script that was provided to you. And that is wrong.
Even Oracle's AUTOALLOCATE only goes up to 64 MB extents once a segment has passed the 1 GB mark.
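To illustrate how wasteful huge uniform extents are, here is a rough sketch (plain Python; a toy model of whole-extent allocation, not Oracle's actual allocator) of the space pinned down by 1023M uniform extents, including the parallel case described above:

```python
import math

def uniform_allocated_mb(segment_mb, extent_mb=1023):
    """Space allocated under UNIFORM extents: always a whole number of extents."""
    return math.ceil(segment_mb / extent_mb) * extent_mb

# A 10 MB segment still grabs a full 1023 MB extent:
print(uniform_allocated_mb(10))    # 1023

# A 1024 MB segment needs two extents, i.e. 2046 MB:
print(uniform_allocated_mb(1024))  # 2046

# A parallel CREATE INDEX starts with one extent per slave:
degree = 4
print(degree * 1023)               # 4092 MB requested up front
```

With only 2 x 2048M datafiles (4 extents total), that up-front 4092 MB request for a DEGREE 4 build cannot be satisfied, which matches the ORA-1652 seen in the alert log.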
Tags: Database
Similar Questions
-
Newbie has a quick question and a follow-up
I want to install Backup Exec, and we added the Microsoft domain administrator account for vSphere authorizations, yet it doesn't see Microsoft Active Directory. I was not able to actually log into the vSphere server itself to check this, only through the client, but I suspect it is simply because the vSphere server has not been added to the domain. Is that likely, or would this be something else?
Small follow-up question: if this is the solution and we add the vSphere server to the domain and reboot it, that will have no effect on the running virtual machines, will it?
I know it's basic, but it came up and was out of the scope of my project.
Thank you.
As far as I know, vCenter is able to authenticate AD users; ESX hosts require local users!
-
Dear all,
OS - Windows server 2012 R2
version - 11.2.0.1.0
Server: production server
ORA-31693: Table data object "AWSTEMPUSER"."TEMPMANUALMAPRPT_273" failed to load/unload and is being skipped due to error:
ORA-02354: Error exporting/importing data
ORA-00942: table or view does not exist
When running expdp I hit the error mentioned above, but expdp completed successfully with a warning, as below:
Job "AWSCOMMONMASTER"."FULLEXPJOB26SEP15_053001" completed with 6 errors at 09:30:54
(1) What is the error?
(2) Is there any problem in the dump file because of the above error? If yes, then I'll re-run expdp.
Please advise. Thanks in advance.
Hello
I suspect that what has happened is that the application dropped a temporary table during the time you ran the export - consider this series of events:
(1) temp table created by application
(2) start expdp work - including this table
(3) the table metadata is extracted
(4) the application deletes the table
(5) expdp tries to retrieve the data from the table - and gets the above error.
Just confirm with the application team that the table is only a temporary thing - it certainly looks like one from its name.
Cheers,
Rich
-
ASO - ignore zeros and missing values in data loads
Hello
There is an option to ignore zero values & missing values in the dialog box when loading data into an ASO cube interactively via EAS.
Is there an option to specify the same in the MaxL import data command? I couldn't find one in the Tech Reference.
I have 12 months in the columns in the data feed. At least 1/4 of my data is zeros. Ignoring zeros keeps the cube size small and fast.
We are on 11.1.2.2.
Appreciate your thoughts.
Thank you
Ethan.
The thing is that it's hidden in the Alter Database (Aggregate Storage) command, where you create the data load buffer. If you are not sure what a data load buffer is, see Loading Data Using Buffers.
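For reference, a minimal MaxL sketch of the load-buffer approach (the application/database names, file paths, and rule file name here are placeholders, not from the original post):

```
alter database MyApp.MyDb initialize load_buffer with buffer_id 1
  property ignore_zero_values, ignore_missing_values;

import database MyApp.MyDb data
  from data_file '/data/month.txt'
  using server rule_file 'ldmonth'
  to load_buffer with buffer_id 1
  on error write to '/logs/load.err';

import database MyApp.MyDb data from load_buffer with buffer_id 1;
```

The ignore options are set when the buffer is initialized, not on the import statement itself, which is why they are easy to miss in the import documentation.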
-
Essbase in MSCS Cluster (metadata and data load failures)
Hello
If there is a power failure on the active node of the Essbase cluster (call this node A) and the cube needs to be rebuilt on cluster node B, how will the cube be rebuilt on node B?
What will orchestrate the activities required to rebuild the cube? Both Essbase nodes are on Microsoft Cluster Services.
In essence, I want to know:
(A) How do we handle a metadata load that failed on Essbase Node 1, on Node 2?
(B) Does the session running the metadata/data load continue on the second Essbase node when the first Essbase node fails?
Thank you for your help in advance.
Kind regards
UB.
If a failover occurs then all connections on the active node will be lost, as Essbase will restart on the second node. Treat it just the same as if you had restarted the Essbase service while a metadata load was running - it would fail at the point when Essbase goes down.
Cheers
John
-
What LKM and IKM between MSSQL 2005 and Oracle 11g for fast data loading
Hello
Can anyone help to decide what LKMs and IKMs are best for data loading between MSSQL and Oracle.
The staging area is Oracle. I need to load around 400 million rows from MSSQL to Oracle 11g.
Best regards
Muhammad
"LKM MSSQL to ORACLE (BCP SQLLDR)" may be useful in your case; it uses BCP and SQL*Loader to extract from MSSQL and load into the Oracle database.
Please see details on KMs to the http://docs.oracle.com/cd/E28280_01/integrate.1111/e12644/ms_sqlserver.htm#BGBJBGCC
-
Data differs when loading through SQL versus through a text file
I have an ASO cube that loads data every morning. The data load is automated via MaxL, and this MaxL file uses a SQL query (against Teradata) as the data source and a rule file for loading the data. A week ago incorrect data began to show up, and nothing had been changed. The strange thing is, when I run the SQL in Teradata Assistant, copy the results to a text file, and load the data via EAS from that text file data source with the same rule file, the data comes out right. Any ideas on why this is happening? So basically, when I use a SQL data source with a particular rule file, data seems to be missing, whereas when I use the results of the same SQL copied into a text file with the same rule file, it seems to work. I'm on 11.1.1.4, and this happens to only one particular cube.
Thank you
Ted.

Hi Ted, thanks.
Well, you reset the database before each load, which takes the 'Overwrite' or 'Add' properties out of the equation, which is good. And it sounds like nothing is going on with multiple buffers (no parallel SQL loading, right?). That really just leaves the "Aggregate Use Last" box - did you happen to check this? By default your MaxL load would be applied as "Aggregate Sum" (which is the equivalent of not checking "Aggregate Use Last").
Failing that, I would suggest you add a WHERE clause to your SQL query to zoom right down to one of your 'problem' values (you have not really described what erroneous data you are seeing) and a) load just that intersection and b) look at the result of the query in the Data Prep Editor.
-
Lost control file and datafiles added - restore/recover without loss of data
Here is what I tried:
created a new table called t2 and made sure the data went to a specific tablespace...
took a level 0 backup
removed the control files
a couple of datafiles were added to the above tablespace, and then more data was inserted
then went to restore the database... but the datafiles and control file still could not be opened? What went wrong here...
-wnet to session 2 and renamed datafile for file unammedSQL> @datafile -- list of datafile Tablespace File Typ Tablespac File Stat Used MB Free MB FILE_MB MAXMB Datafile_name FILE_ID AUT ---------- -------- --------- --------- ---------- ---------- ---------- -------- ------------------------------ ---------- --- UNDOTBS1 Datafile ONLINE AVAILABLE 16 84 100 1,024 /data/trgt/undotbs01.dbf 3 YES USERS Datafile ONLINE AVAILABLE 1153 895 2048 3,072 /data3/trgt/user02.dbf 5 YES CNT_TST Datafile ONLINE AVAILABLE 1 9 10 0 /data3/trgt/cnt_tst01.dbf 7 NO SYSAUX Datafile ONLINE AVAILABLE 626 35 660 32,768 /data/trgt/sysaux01.dbf 2 YES USERS Datafile ONLINE AVAILABLE 2031 17 2048 2,048 /data3/trgt/move/users01.dbf 4 YES SYSTEM Datafile ONLINE AVAILABLE 712 58 770 32,768 /data/trgt/system01.dbf 1 YES USERS Datafile ONLINE AVAILABLE 65 35 100 32,768 /data3/trgt/users03.dbf 6 YES 7 rows selected. -- new table is created called t2 and its going into TS called cnt_tst SQL> CREATE TABLE TEST.T2 ( C1 DATE, C2 NUMBER, C3 NUMBER, C4 VARCHAR2(300 BYTE) ) TABLESPACE cnt_tst; 2 3 4 5 6 7 8 Table created. -- data inserted SQL> INSERT INTO test.T2 SELECT * FROM (SELECT SYSDATE, ROWNUM C2, DECODE(MOD(ROWNUM,100),99,99,1) C3, RPAD('A',300,'A') C4 FROM DUAL CONNECT BY LEVEL <= 10000) ; 2 3 4 5 6 7 8 9 10 11 12 13 14 15 10000 rows created. SQL> commit; Commit complete. 
-- to check of cnt_tst has any free space or not, as we can see its full SQL> @datafile Tablespace File Typ Tablespac File Stat Used MB Free MB FILE_MB MAXMB Datafile_name FILE_ID AUT ---------- -------- --------- --------- ---------- ---------- ---------- -------- ------------------------------ ---------- --- UNDOTBS1 Datafile ONLINE AVAILABLE 16 84 100 1,024 /data/trgt/undotbs01.dbf 3 YES USERS Datafile ONLINE AVAILABLE 1153 895 2048 3,072 /data3/trgt/user02.dbf 5 YES SYSAUX Datafile ONLINE AVAILABLE 626 35 660 32,768 /data/trgt/sysaux01.dbf 2 YES USERS Datafile ONLINE AVAILABLE 2031 17 2048 2,048 /data3/trgt/move/users01.dbf 4 YES SYSTEM Datafile ONLINE AVAILABLE 712 58 770 32,768 /data/trgt/system01.dbf 1 YES USERS Datafile ONLINE AVAILABLE 65 35 100 32,768 /data3/trgt/users03.dbf 6 YES CNT_TST Datafile ONLINE AVAILABLE 10 0 10 0 /data3/trgt/cnt_tst01.dbf 7 NO 7 rows selected. SQL> select count(*) from test.t2; COUNT(*) ---------- 10000 1 row selected. -- to get a count and max on date SQL> select max(c1) from test.t2; MAX(C1) ------------------ 29-feb-12 13:47:52 1 row selected. SQL> -- AT THIS POINT A LEVEL 0 BACKUP IS TAKEN (using backup database plus archivelog) SQL> -- now control files are removed SQL> select name from v$controlfile; NAME -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- /ctrl/trgt/control01.ctl /ctrl/trgt/control02.ctl 2 rows selected. SQL> SQL> ! rm /ctrl/trgt/control01.ctl SQL> ! rm /ctrl/trgt/control02.ctl SQL> ! ls -ltr /ctrl/trgt/ ls: /ctrl/trgt/: No such file or directory SQL> -- new datafile is added to CNT_TST TABLESPACE and new data is added as well SQL> ALTER TABLESPACE CNT_TST ADD DATAFILE '/data3/trgt/CNT_TST02.dbf' SIZE 100M AUTOEXTEND OFF; Tablespace altered. SQL> ALTER SYSTEM CHECKPOINT; System altered. SQL> alter system switch logfile; System altered. SQL> / System altered. 
SQL> / System altered. SQL> ALTER TABLESPACE CNT_TST ADD DATAFILE '/data3/trgt/CNT_TST03.dbf' SIZE 100M AUTOEXTEND OFF; Tablespace altered. SQL> INSERT INTO test.T2 SELECT * FROM (SELECT SYSDATE, ROWNUM C2, DECODE(MOD(ROWNUM,100),99,99,1) C3, RPAD('A',300,'A') C4 FROM DUAL CONNECT BY LEVEL <= 10000) ; 2 3 4 5 6 7 8 9 10 11 12 13 14 15 10000 rows created. SQL> / 10000 rows created. SQL> commit; Commit complete. SQL> INSERT INTO test.T2 SELECT * FROM (SELECT SYSDATE, ROWNUM C2, DECODE(MOD(ROWNUM,100),99,99,1) C3, RPAD('A',300,'A') C4 FROM DUAL CONNECT BY LEVEL <= 40000) ; 2 3 4 5 6 7 8 9 10 11 12 13 14 15 40000 rows created. SQL> commit; Commit complete. SQL> @datafile -- to make sure new datafile has been registered with the DB Tablespace File Typ Tablespac File Stat Used MB Free MB FILE_MB MAXMB Datafile_name FILE_ID AUT ---------- -------- --------- --------- ---------- ---------- ---------- -------- ------------------------------ ---------- --- CNT_TST Datafile ONLINE AVAILABLE 9 91 100 0 /data3/trgt/CNT_TST03.dbf 9 NO UNDOTBS1 Datafile ONLINE AVAILABLE 16 84 100 1,024 /data/trgt/undotbs01.dbf 3 YES USERS Datafile ONLINE AVAILABLE 1153 895 2048 3,072 /data3/trgt/user02.dbf 5 YES CNT_TST Datafile ONLINE AVAILABLE 9 91 100 0 /data3/trgt/CNT_TST02.dbf 8 NO SYSAUX Datafile ONLINE AVAILABLE 626 35 660 32,768 /data/trgt/sysaux01.dbf 2 YES USERS Datafile ONLINE AVAILABLE 2031 17 2048 2,048 /data3/trgt/move/users01.dbf 4 YES SYSTEM Datafile ONLINE AVAILABLE 712 58 770 32,768 /data/trgt/system01.dbf 1 YES USERS Datafile ONLINE AVAILABLE 65 35 100 32,768 /data3/trgt/users03.dbf 6 YES CNT_TST Datafile ONLINE AVAILABLE 10 0 10 0 /data3/trgt/cnt_tst01.dbf 7 NO 9 rows selected. -- now the count and max ... note count before backup was 10000 and max(c1) was diff SQL> select count(*) from test.t2; COUNT(*) ---------- 70000 1 row selected. SQL> select max(c1) from test.t2; MAX(C1) ------------------ 29-feb-12 13:58:25 1 row selected. 
SQL> -- now restore starts SQL> shutdown abort; ORACLE instance shut down. SQL> exit Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production With the Partitioning, OLAP, Data Mining and Real Application Testing options [oracle@berry trgt]$ rman Recovery Manager: Release 11.2.0.1.0 - Production on Wed Feb 29 14:01:48 2012 Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved. RMAN> connect catalog rman/pass@rcat connected to recovery catalog database RMAN> connect target / connected to target database (not started) RMAN> startup nomount; Oracle instance started Total System Global Area 188313600 bytes Fixed Size 1335388 bytes Variable Size 125833124 bytes Database Buffers 58720256 bytes Redo Buffers 2424832 bytes RMAN> restore controlfile from autobackup; Starting restore at 29-FEB-12 14:02:37 allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=20 device type=DISK recovery area destination: /backup/trgt/flash_recovery_area database name (or database unique name) used for search: TRGT channel ORA_DISK_1: no AUTOBACKUPS found in the recovery area channel ORA_DISK_1: looking for AUTOBACKUP on day: 20120229 channel ORA_DISK_1: AUTOBACKUP found: /backup/trgt/backup/cont_c-3405317011-20120229-09 channel ORA_DISK_1: restoring control file from AUTOBACKUP /backup/trgt/backup/cont_c-3405317011-20120229-09 channel ORA_DISK_1: control file restore from AUTOBACKUP complete output file name=/ctrl/trgt/control01.ctl output file name=/ctrl/trgt/control02.ctl Finished restore at 29-FEB-12 14:02:39 RMAN> alter database mount; database mounted released channel: ORA_DISK_1 RMAN> recover database; Starting recover at 29-FEB-12 14:02:55 Starting implicit crosscheck backup at 29-FEB-12 14:02:55 allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=20 device type=DISK Crosschecked 96 objects Finished implicit crosscheck backup at 29-FEB-12 14:02:57 Starting implicit crosscheck copy at 29-FEB-12 14:02:57 using channel ORA_DISK_1 
Finished implicit crosscheck copy at 29-FEB-12 14:02:57 searching for all files in the recovery area cataloging files... no files cataloged using channel ORA_DISK_1 starting media recovery archived log for thread 1 with sequence 13 is already on disk as file /redo_archive/trgt/online/redo01.log archived log for thread 1 with sequence 14 is already on disk as file /redo_archive/trgt/online/redo02.log archived log for thread 1 with sequence 15 is already on disk as file /redo_archive/trgt/online/redo03.log archived log file name=/redo_archive/trgt/archive/1_10_776523284.dbf thread=1 sequence=10 archived log file name=/redo_archive/trgt/archive/1_10_776523284.dbf thread=1 sequence=10 archived log file name=/redo_archive/trgt/archive/1_11_776523284.dbf thread=1 sequence=11 archived log file name=/redo_archive/trgt/archive/1_12_776523284.dbf thread=1 sequence=12 archived log file name=/redo_archive/trgt/online/redo01.log thread=1 sequence=13 RMAN-00571: =========================================================== RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS =============== RMAN-00571: =========================================================== RMAN-03002: failure of recover command at 02/29/2012 14:02:59 ORA-01422: exact fetch returns more than requested number of rows RMAN-20505: create datafile during recovery RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/redo_archive/trgt/online/redo01.log' ORA-00283: recovery session canceled due to errors ORA-01244: unnamed datafile(s) added to control file by media recovery ORA-01110: data file 9: '/data3/trgt/CNT_TST03.dbf' RMAN> -- wnet to session 2 and renamed datafile from unammed
After before was done, went back to session 1 and I tried recovered the DBSQL> select name from v$datafile; NAME -------------------------------------------------------------------------------- /data/trgt/system01.dbf /data/trgt/sysaux01.dbf /data/trgt/undotbs01.dbf /data3/trgt/move/users01.dbf /data3/trgt/user02.dbf /data3/trgt/users03.dbf /data3/trgt/cnt_tst01.dbf /oracle/app/product/11.2.0/dbhome_1/dbs/UNNAMED00008 /oracle/app/product/11.2.0/dbhome_1/dbs/UNNAMED00009 9 rows selected. SQL> alter database create datafile '/oracle/app/product/11.2.0/dbhome_1/dbs/UNNAMED00008' as '/data3/trgt/CNT_TST02.dbf'; Database altered. SQL> alter database create datafile '/oracle/app/product/11.2.0/dbhome_1/dbs/UNNAMED00009' as '/data3/trgt/CNT_TST03.dbf'; Database altered. SQL> select name from v$datafile; NAME -------------------------------------------------------------------------------- /data/trgt/system01.dbf /data/trgt/sysaux01.dbf /data/trgt/undotbs01.dbf /data3/trgt/move/users01.dbf /data3/trgt/user02.dbf /data3/trgt/users03.dbf /data3/trgt/cnt_tst01.dbf /data3/trgt/CNT_TST02.dbf /data3/trgt/CNT_TST03.dbf 9 rows selected.
RMAN> recover database; Starting recover at 29-FEB-12 14:06:16 using channel ORA_DISK_1 starting media recovery archived log for thread 1 with sequence 13 is already on disk as file /redo_archive/trgt/online/redo01.log archived log for thread 1 with sequence 14 is already on disk as file /redo_archive/trgt/online/redo02.log archived log for thread 1 with sequence 15 is already on disk as file /redo_archive/trgt/online/redo03.log archived log file name=/redo_archive/trgt/online/redo01.log thread=1 sequence=13 archived log file name=/redo_archive/trgt/online/redo02.log thread=1 sequence=14 archived log file name=/redo_archive/trgt/online/redo03.log thread=1 sequence=15 media recovery complete, elapsed time: 00:00:00 Finished recover at 29-FEB-12 14:06:17 RMAN> alter database open resetlogs; database opened new incarnation of database registered in recovery catalog starting full resync of recovery catalog full resync complete starting full resync of recovery catalog full resync complete RMAN> exit Recovery Manager complete. [oracle@berry trgt]$ [oracle@berry trgt]$ [oracle@berry trgt]$ sq SQL*Plus: Release 11.2.0.1.0 Production on Wed Feb 29 14:07:18 2012 Copyright (c) 1982, 2009, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production With the Partitioning, OLAP, Data Mining and Real Application Testing options SQL> alter session set NLS_DATE_FORMAT="dd-mon-yy hh24:mi:ss: 2 SQL> SQL> alter session set NLS_DATE_FORMAT="dd-mon-yy hh24:mi:ss"; Session altered. 
SQL> select count(*) from test.t2;
select count(*) from test.t2
*
ERROR at line 1:
ORA-00376: file 8 cannot be read at this time
ORA-01110: data file 8: '/data3/trgt/CNT_TST02.dbf'

SQL> select max(c1) from test.t2;
select max(c1) from test.t2
*
ERROR at line 1:
ORA-00376: file 8 cannot be read at this time
ORA-01110: data file 8: '/data3/trgt/CNT_TST02.dbf'

SQL> alter database datafile 8 online;
alter database datafile 8 online
*
ERROR at line 1:
ORA-01190: control file or data file 8 is from before the last RESETLOGS
ORA-01110: data file 8: '/data3/trgt/CNT_TST02.dbf'

So what did I do wrong in my recovery that I could not get my data back? How can I avoid this and restore my DB?
Edited by: user8363520 on Feb 29, 2012 12:24 PM
user8363520 wrote:
so can we get this back or can't we?
You seem to have:
(a) an old version of the data files via the rman backup
(b) an old version of the control file
(c) backed-up archived redo logs
(d) archived redo logs that have been recently generated
(e) the current online redo logs
Therefore, you should be able to bring the database back to the state it was in when you did the shutdown abort.
I don't do enough recoveries to be able to quote the details of the commands to use (and I often find myself falling back to command-line recovery after the rman restore of files), but the steps you need should be:
Take a safety copy of the current data files and the archived and online redo log files.
Restore the backup control file, data files, and backed-up archived redo logs.
Recover the database using backup controlfile until cancel.
You will have to do the 'create datafile' bit as the recovery hits the 'missing file' bit.
You will need to supply the names of the archived and online redo log files as the recovery reaches them
(although, presumably, you could leave copies of the logs in the default location).
Regards
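The steps above can be sketched roughly as follows (a hedged outline, not an exact script; the file names are taken from the transcript earlier in the thread, and prompts will vary):

```sql
-- after restoring the backup control file and datafiles with rman:
SQL> startup mount;
SQL> recover database using backup controlfile until cancel;
-- when recovery stops on a missing file, re-create it:
SQL> alter database create datafile
       '/oracle/app/product/11.2.0/dbhome_1/dbs/UNNAMED00008'
       as '/data3/trgt/CNT_TST02.dbf';
-- resume recovery, supplying archived and online redo log names
-- as they are requested, then cancel and open:
SQL> recover database using backup controlfile until cancel;
SQL> alter database open resetlogs;
```

The key difference from the original attempt is applying the online redo logs during the backup-controlfile recovery, before the resetlogs, so the re-created datafiles are not left behind the RESETLOGS point.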
Jonathan Lewis
http://jonathanlewis.WordPress.com
Author: Oracle Core
-
Windows 7 64-bit: slow loading screen and Windows Welcome not responding
Hello guys :D
I had this problem yesterday and it looks really weird, because I have already used several methods for 'Stuck on the Welcome screen' or 'Windows Explorer is not responding'. So here's the problem: as I said, Windows got stuck on the "Welcome" screen, but some time later, about 15 minutes, it fully loaded. Well, HERE IS THE BIGGER PROBLEM: my desktop was black with only my cursor! After about 10 minutes it returned to normal, BUT the icons were not loaded. And when I click on any folder or right-click the mouse, WINDOWS EXPLORER IS NOT RESPONDING (error status code c0000185 InPageCoFire) and I need to restart. I restart and, once again, it does not respond. My laptop is now like this: turn on, wait 15 minutes, watch "Windows Explorer is not responding", and it turns off. It makes me really mad, so I'd really appreciate your help. Sorry if this question was asked before.

Uh, it seems to still be freezing; you can try another way! Enter the Advanced Boot Options again, then choose 'Safe Mode'. This loads Windows with basic drivers and services, which could help you get to the Windows desktop without running into problems. Once you're in Safe Mode, open the Start menu and search for "cmd". Right-click 'cmd' and select "Run as administrator". When the command prompt appears, check your hard disk's file system for errors by typing "chkdsk c: /f" (do not type the quotation marks), where "c:" is the drive letter of your Windows 7 installation. Restart your computer, and it will check the disk's file system for errors. When the scan has finished and you have rebooted, enter the Advanced Boot Options again and choose "Safe Mode" again. Then open a command prompt as administrator and type "sfc /scannow" (do not type the quotation marks). SFC (System File Checker) will check whether your system files are healthy. Wait for the scan to finish, then restart your computer. If these tips don't help, update your question with its status. :)
-
Original title: Windows 7 problem
I keep having problems with my desktop. When I try to shut down, rather than getting the login screen, the screen starts flashing as if trying to load, and then the computer finally shuts down. I just bought the computer and it runs Windows 7. What could be the problem? Oh, I forgot to mention that after I start the computer, a configuration screen appears saying that Windows did not close normally and asking if I want to start in Safe Mode.
Hello
Here are a few questions to better understand the issue:
1. Does this happen only while logging off?
2. What happens when you restart?
3. Have you tried with different user accounts?
4. Did you make any changes to the computer before the problem appeared?
Let us try the following and see if they help.
Method 1: Check the event viewer for more details on this subject.
Event Viewer is a tool that displays detailed information about important events on your computer.
http://Windows.Microsoft.com/en-us/Windows7/open-Event-Viewer
Method 2: Run the following fix:
There is a delay when you stop, restart or log off a computer that is running Windows 7 or Windows Server 2008 R2
http://support.Microsoft.com/kb/975777
Method 3: Perform the clean boot and check
It may be that a third-party application is causing this issue. Put the system into a clean boot state.
To help resolve the error message, you can start Windows Vista or Windows 7 by using a minimal set of drivers and startup programs. This type of boot is known as a "clean boot". A clean boot helps eliminate software conflicts.
How to troubleshoot a problem by performing a clean boot in Windows Vista or in Windows 7
http://support.Microsoft.com/kb/929135
Please note: After troubleshooting, be sure to start your computer in normal mode by following step 7.
Method 4: I suggest you run a full system scan just to be sure.
Here is a link that will give you information on how to perform a full scan of the system:
http://www.Microsoft.com/security/scanner/en-us/default.aspx
Note 1: The Microsoft Safety Scanner expires 10 days after being downloaded. To rerun a scan with the latest anti-malware definitions, download and run the Microsoft Safety Scanner again.
Note 2: Data files that are infected may have to be cleaned by removing the file completely, which means there is a risk of data loss.
-
On Oracle APEX, is there a feasible way to change the default functionality?
1. Can we change the Data Load Wizard from insert-only to insert/update functionality based on the source table?
2. Possibility of validation - if the count of records is less than that of the target table, the user should get a choice to continue with the insert or cancel the data load process.
I use APEX 5.0
Please advise on these 2 points.
Hi Sudhir,
I'll answer your questions below:
(1) Yes, the data load can insert/update.
It's the default behavior: if you choose the right columns for detecting duplicate records, you will be able to see which records are new and which are updated.
(2) This will be a little tricky, but you can do it by using the underlying collections. The data load uses several collections to perform its operations, and in the first step we load all of the user's records into the "CLOB_CONTENT" collection. By checking this against the number of records in the underlying table, you can easily add a new validation before moving from step 1 to step 2.
Kind regards
Patrick
-
problems with the JSON data loading
Hello
I followed Simon Widjaja's (EDGEDOCKS) YouTube lesson on loading external JSON data, but I am not able to even log the data to the console.
I get this error: "Javascript error in event handler! Event Type = element."
content.json is located in the folder. The data in it is very simple:
[
{
"title": "TITLE 1",
"description": "DESCRIPTION 1"
},
{
"title": "TITLE 2",
"description": "DESCRIPTION 2"
}
]
And here's the code in edgeActions.js:
(function ($, Edge, compId) {
var Composition = Edge.Composition, Symbol = Edge.Symbol; // aliases for commonly used Edge classes

// Edge symbol: 'stage'
(function (symbolName) {

Symbol.bindElementAction(compId, symbolName, "document", "compositionReady", function (sym, e) {
// external JSON data loading
$.ajax({
type: 'GET',
cache: false,
url: 'content.json',
dataType: 'json',
success: function (data) { console.log("data:", data); },
error: function () { console.log("something went wrong"); }
});
});
// End of edge binding

})("stage");
// End of edge symbol: 'stage'

})(window.jQuery || AdobeEdge.$, AdobeEdge, "EDGE-11125477");
I also tried $.getJSON() as mentioned in the YouTube video.
Please note: the 'something went wrong' message was not logged either.
I use the free trial version. Is this a limitation of the free trial version?
Well, same question as here: loading external data using ajax
$.ajax() or $.getJSON() cannot run if the jQuery file is missing.
You must add the jQuery file as shown below:
See: http://jquery.com/download/
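For example, one way to do it if you are editing the published HTML directly (file names here are hypothetical; in Edge Animate you would normally add the library through the composition's script/library panel instead):

```html
<!-- load jQuery before any script that calls $.ajax() or $.getJSON() -->
<script src="js/jquery-1.11.1.min.js"></script>
```

jQuery must appear before the Edge action code runs, otherwise $ is undefined and neither the success nor the error callback ever fires, which matches the symptom described above.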
Note: Without loading the jQuery file, you can still use these functions: the Adobe Edge Animate CC JavaScript API.
-
FDM event scripts fired twice during data loads
Here's an interesting one. I added the following script to three different events (one at a time, ensuring that only one of them was in place at any given moment) to clear data before loading to Essbase:
Script event content:
' Declare local variables
Dim objShell
Dim strCMD

' Call MaxL script to perform the data clear calculation.
Set objShell = CreateObject("WScript.Shell")
strCMD = "D:\Oracle\Middleware\EPMSystem11R1\products\Essbase\EssbaseClient\bin\startMAXL.cmd D:\Test.mxl"
API.DataWindow.Utilities.mShellAndWait strCMD, 0
MaxL Script:
login * identified by * on *;
execute calculation 'FIX("Member1","Member2") CLEARDATA "Member3"; ENDFIX' on *.* ;
exit;
However, it seems that the clear is performed twice, both before and after the data has been loaded to Essbase. This was verified at every step by checking the Essbase application log:
With no event script:
- No Essbase data clear appears in the application log.
After adding the script to the "BefExportToDat" event:
- The script is executed once when you click Export in the FDM Web Client (before the "Target System Load" modal popup is displayed). Entries are visible in the Essbase application log.
- The script is then run a second time when you click the OK button in the "Target System Load" modal popup. Entries are visible in the Essbase application log.
Above to add the script to the event "AftExportToDat":
-The script is executed once when you click Export in the customer Web FDM (before the "target load" modal popup is displayed). Entries are visible in the log of Essbase applications.
-Script is then run a second time when you click the OK button in the modal pop-up "target Load System". Entries are visible in the log of Essbase applications.
After adding the script to the "BefLoad" event:
- The script does NOT run when you click Export in the FDM Web Client (before the "Target System Load" modal popup is displayed).
- The script runs AFTER the data is loaded to Essbase, when the OK button is clicked in the "Target System Load" modal popup. Entries are visible in the Essbase application log.
Some notes on the above:
1. "BefExportToDat" and "AftExportToDat" are both executed twice, before and after the "Target System Load" modal popup. :-(
2. "BefLoad" is executed AFTER the data is loaded to Essbase. :-( :-(
Does anyone have any idea how we could run an Essbase database clear before the data is loaded, and not after we have already loaded the up-to-date data? And perhaps why the event scripts above seem to fire twice? There doesn't seem to be any logic to this!
BefExportToDat - entries in the Essbase application log:
[Wed May 16 16:19:51 2012]Local/Monthly/Monthly/admin@Native Directory/140095859451648/Info(1013091)
Received Command [Calculate] from user [admin@Native Directory]
[Wed May 16 16:19:51 2012]Local/Monthly/Monthly/admin@Native Directory/140095859451648/Info(1013162)
Received Command [Calculate] from user [admin@Native Directory]
[Wed May 16 16:19:51 2012]Local/Monthly/Monthly/admin@Native Directory/140095859451648/Info(1012555)
Clearing data in [Member3] partition with fixed members [Period(Member1); Scenario(Member2)]
...
[Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1003037)
Data Load Updated [98] cells
[Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1003024)
Data Load Elapsed Time : [0.52] seconds
...
[Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1013091)
Received Command [Calculate] from user [admin@Native Directory]
[Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1013162)
Received Command [Calculate] from user [admin@Native Directory]
[Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1012555)
Clearing data in [Member3] partition with fixed members [Period(Member1); Scenario(Member2)]
AftExportToDat - entries in the Essbase application log:
[Wed May 16 16:21:32 2012]Local/Monthly/Monthly/admin@Native Directory/140095933069056/Info(1013091)
Received Command [Calculate] from user [admin@Native Directory]
[Wed May 16 16:21:32 2012]Local/Monthly/Monthly/admin@Native Directory/140095933069056/Info(1013162)
Received Command [Calculate] from user [admin@Native Directory]
[Wed May 16 16:21:32 2012]Local/Monthly/Monthly/admin@Native Directory/140095933069056/Info(1012555)
Clearing data in [Member3] partition with fixed members [Period(Member1); Scenario(Member2)]
...
[Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1003037)
Data Load Updated [98] cells
[Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1003024)
Data Load Elapsed Time : [0.52] seconds
...
[Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095928858368/Info(1013091)
Received Command [Calculate] from user [admin@Native Directory]
[Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095928858368/Info(1013162)
Received Command [Calculate] from user [admin@Native Directory]
[Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095928858368/Info(1012555)
Clearing data in [Member3] partition with fixed members [Period(Member1); Scenario(Member2)]
BefLoad - entries in the Essbase application log:
[Wed May 16 16:23:43 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1013091)
Received Command [Calculate] from user [admin@Native Directory]
[Wed May 16 16:23:43 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1013162)
Received Command [Calculate] from user [admin@Native Directory]
[Wed May 16 16:23:43 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1012555)
Clearing data in [Member3] partition with fixed members [Period(Member1); Scenario(Member2)]
...
[Wed May 16 16:23:44 2012]Local/Monthly/Monthly/admin@Native Directory/140095929911040/Info(1003037)
Data Load Updated [98] cells
[Wed May 16 16:23:44 2012]Local/Monthly/Monthly/admin@Native Directory/140095929911040/Info(1003024)
Data Load Elapsed Time : [0.52] seconds
...
[Wed May 16 16:23:45 2012]Local/Monthly/Monthly/admin@Native Directory/140095860504320/Info(1013091)
Received Command [Calculate] from user [admin@Native Directory]
[Wed May 16 16:23:45 2012]Local/Monthly/Monthly/admin@Native Directory/140095860504320/Info(1013162)
Received Command [Calculate] from user [admin@Native Directory]
[Wed May 16 16:23:45 2012]Local/Monthly/Monthly/admin@Native Directory/140095860504320/Info(1012555)
Clearing data in [Member3] partition with fixed members [Period(Member1); Scenario(Member2)]

James, the Export and Load event scripts will fire four times, once for each type of file: the .DAT file (main TB file), the -A.DAT (log file), the -B.DAT, and the -C.DAT.
To work around this, so that the clear only runs during the load of the main TB file, add the following (or something similar) at the beginning of your event scripts. This assumes that strFile is in the subroutine's parameter list:
Select Case LCase(Right(strFile, 6))
    Case "-a.dat", "-b.dat", "-c.dat"
        Exit Sub
End Select
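For illustration only, the same suffix guard sketched in JavaScript (the function name and file names here are hypothetical): only the main .DAT file should trigger the clear, while the -A/-B/-C companion files are skipped.

```javascript
// Hypothetical sketch of the FDM suffix check above: return true only
// for the main .DAT file, so the Essbase clear runs once per export.
function shouldRunClear(fileName) {
  // Mirror VBScript's LCase(Right(strFile, 6)).
  const suffix = fileName.slice(-6).toLowerCase();
  return !["-a.dat", "-b.dat", "-c.dat"].includes(suffix);
}

console.log(shouldRunClear("Export.dat"));   // true  -> run the clear
console.log(shouldRunClear("Export-A.dat")); // false -> skip companion file
```

The comparison is done on the lowercased last six characters, so it is case-insensitive, exactly like the VBScript Select Case version.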
-
All my data has doubled and the rows are slightly offset and overlapping. Unusable. How can I restore the distorted view content?
Hello Arnold,
A screenshot of the upper-left corner of your document would help clarify the issue. Include as much as in the example below.
If this section of the table does not display the "doubled and shifted" data, provide a second screenshot of a section of the same size, showing a sample of the data in question.
Kind regards
Barry
-
Problems loading Google and YouTube in Safari
Is anyone else having problems loading Google and YouTube in Safari?
The loading progress bar stalls or, in the case of YouTube, photos and comments sometimes fail to appear. This is an intermittent problem (worse when waking from sleep). At other times, the sites load normally.
I don't have hardware, internet connection, or extension problems. I have also cleared the cache, history, cookies, etc.
The problem seems to be limited to Google sites (YouTube, etc.). Others have had similar problems, and it seems to have worsened with the latest Safari update (9.0.3) or the update of the operating system to El Capitan (10.11.3).
Any idea of the cause or possible solutions?
1. System Preferences > Flash Player > Advanced > Delete All
Press the "Delete All..." button under "Browsing Data and Settings".
Test now.
2. Safari > Preferences > Privacy > Cookies and Website Data:
Press the "Remove All Website Data" button.
Press the "Details" button.
Delete all cookies except those from Apple, your ISP, and your banks.
3. Safari > Preferences > Extensions
Disable all extensions, restart Safari, and test.
Enable them one at a time and test.
To uninstall one, select it and click the "Uninstall" button.