Exporting Chinese characters from Oracle 8i to Oracle 12c

Hi all

I'll be very grateful if someone can give me some advice. I have spent several days testing to find the cause of a problem displaying Chinese characters.


Oracle version of the export server (Unix machine)

Oracle8i Enterprise Edition Release 8.1.7.0.0

SELECT DECODE(parameter, 'NLS_CHARACTERSET', 'CHARACTER SET',
                         'NLS_LANGUAGE', 'LANGUAGE',
                         'NLS_TERRITORY', 'TERRITORY') name,
       value
FROM v$nls_parameters
WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_LANGUAGE', 'NLS_TERRITORY');

NAME                 VALUE
-------------------- --------------------
LANGUAGE             AMERICAN
TERRITORY            AMERICA
CHARACTER SET        US7ASCII

NLS_LANG is set to AMERICAN_AMERICA.US7ASCII on my machine

I can correctly display traditional Chinese characters in sqlplus

exp userAdm/pwd01@sampleDB FULL=Y FILE=sampleDB.dmp

The export succeeds.
---------------------------------------------

Oracle version of the import server
Oracle Database 12c Release 12.1.0.1.0 - 64-bit

I create a new database on the import server (Windows 7)

NAME                 VALUE
-------------------- --------------------
LANGUAGE             AMERICAN
TERRITORY            AMERICA
CHARACTER SET        ZHT16MSWIN950   <- cannot set US7ASCII, because Oracle 12c does not offer that choice

NLS_LANG is set to AMERICAN_AMERICA.US7ASCII before the import

imp userAdm/pwd01@sampleDB FILE=sampleDB.dmp LOG=sampleDB.log FULL=Y

The import succeeds.

However, when I set NLS_LANG = AMERICAN_AMERICA.US7ASCII on my machine,

I can't display traditional Chinese characters in sqlplus correctly

I also tried the following settings afterwards, all in vain:

set NLS_LANG=TRADITIONAL CHINESE_TAIWAN.ZHT16MSWIN950
set NLS_LANG=AMERICAN_AMERICA.AL16UTF16
set NLS_LANG=AMERICAN_AMERICA.ZHT16MSWIN950

Can someone give me some advice?
Thank you!

V$NLS_PARAMETERS shows the database character set as well, so consulting NLS_DATABASE_PARAMETERS will not tell us more.

What you have is a typical pass-through configuration. The binary codes of traditional Chinese characters are stored in the US7ASCII database but are not declared as such. The characters were presumably entered from Windows clients, which suggests that their real encoding is ZHT16MSWIN950.
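If you want to verify what is really stored, one quick check is to look at the raw bytes; a minimal sketch only (the table and column names are placeholders for your own):

-- Run on the 8i database with NLS_LANG set to AMERICAN_AMERICA.US7ASCII,
-- so the data comes back without any conversion (pass-through).
-- DUMP(expr, 1016) shows the raw byte values stored in the column.
SQL> SELECT your_column,
            DUMP(your_column, 1016) AS stored_bytes
       FROM your_table
      WHERE ROWNUM <= 5;

If the byte values match the MSWIN950 (Big5) encoding of the characters you expect, the pass-through diagnosis is confirmed.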

You cannot directly export/import from the US7ASCII 8i database into any other database that is not US7ASCII as well without losing the non-ASCII character codes.

Your choices:

1. Create the 12c database with the US7ASCII character set. You can do this by invoking DBCA directly (not through the Installer) and choosing the Advanced installation mode. You can then uncheck the box "Show recommended character sets only", and US7ASCII will become available. Alternatively, you can use the heavyweight manual procedure based on the CREATE DATABASE statement. Once you have a US7ASCII 12c database, you can export and import without data loss, provided NLS_LANG is set to US7ASCII, and keep running your application the way you used to.
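If you prefer to script the database creation, DBCA in silent mode can also take the character set directly; a rough sketch only (the template, names and exact options depend on your installation):

dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbName sampledb -sid sampledb -characterSet US7ASCII -nationalCharacterSet AL16UTF16

The "Show recommended character sets only" checkbox applies to the GUI path; in silent mode you simply pass -characterSet explicitly.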

This choice is conceptually the simplest, but you will continue running in an unsupported configuration, and you will certainly encounter problems sooner or later (usually sooner). I do not recommend this option.

2. Create the new database and export/import as in option 1 above. However, before you run the application, migrate the database character set to Unicode AL32UTF8 using the Database Migration Assistant for Unicode (DMU), which comes with Oracle Database 12c. Your database will then be ready for any future deployment. You may need to configure or change your application to work properly with an AL32UTF8 database. This is the most complex but the most recommended option.

3. Create the new database and export/import as in option 1 above. However, before running the application, migrate the database character set to the real character set of your data. This character set must be determined from which applications/clients/platforms entered the data into the 8i database. You perform the migration using the Database Migration Assistant for Unicode and the csrepair script. You should then be able to run your application without changes, after only setting NLS_LANG appropriately for your platform.
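For example, if the data turns out to be ZHT16MSWIN950 and the clients are Traditional Chinese Windows machines, the client side would then typically be set like this (a sketch to adapt to your platform):

REM Windows command prompt, before starting sqlplus or exp/imp
set NLS_LANG=TRADITIONAL CHINESE_TAIWAN.ZHT16MSWIN950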

Thank you

Sergiusz

Tags: Database

Similar Questions

  • I get an error message when previewing: "1 WARNING: assets/private/var/folders/hm/sf71s7m90n727jhsls9fgsp80000gn/T/cleaning the start-up/InDesign Snippets / Snippet_30BBDDE25.idms was not exported because it is not found." It might have to do with an MPA

    I get an error message when previewing: "1 WARNING: assets/private/var/folders/hm/sf71s7m90n727jhsls9fgsp80000gn/T/cleaning the start-up/InDesign Snippets / Snippet_30BBDDE25.idms was not exported because it is not found." It might have to do with an image that I created in InDesign and exported as an EPS, then created a PNG out of it. I hope that's it; it's scaring me... Thanks for any ideas.

    Please try relinking the file, or check whether the assets are actually located at the defined path.

    Thank you

    Sanjit

  • Rendering and export fail with an unknown error because of Lumetri Color?

    I am getting rendering & export failures with an 'unknown error' within Premiere Pro and Media Encoder.  I think the problem is related to the Lumetri Color filter, which is used extensively throughout the timeline.  There are hundreds of clips and each clip has 2-4 Lumetri Color filters applied.  Each clip also has Red Giant Cosmo and FilmConvert Pro 2.36.1 filters, which come after (visually underneath, in the order of the Effect Controls panel) all the Lumetri filters.  The reason I believe the problem is related to the Lumetri Color filter is that this is the first time I have used it.  Previously I used the Red Giant Colorista III filter for coloring, but last week I decided to try Lumetri Color for the same effect.

    Please let me know if anyone has had similar experiences, what were their solutions, or if someone has some ideas on how to solve this problem.  I would really appreciate the help!

    System details:

    Windows 10 Pro 64-bit

    Intel i7 5960X 8-core 3.0 GHz (water-cooled and overclocked to 4.2 GHz)

    64GB 2133 MHz DRAM (non-ECC)

    2 x Nvidia GTX 980 4 GB GPUs (not in SLI)

    Intel 750 PCIe 400 GB SSD (for OS)

    Samsung 840 Pro 1 TB SSD (for Adobe cache)

    Samsung 840 Pro 1 TB SSD (for export)

    48 TB RAID 5 array (eight 6 TB HDs with an LSI MegaRAID 9361-8i card) (for media)

    Information on the timeline:

    1920 x 1080 (original video files are MP4 UHD scaled down by 50% in the timeline)

    23.976 fps

    CUDA Mercury Render Engine

    Length of the sequence: 14:30

    I have discovered a workaround of sorts.  I noticed that when rendering in PP it would make some progress before failing (the green line above the timeline would get a little longer).  If I saved that progress and rebooted the system, then tried to render again, it would get a little further.  I did this dozens of times over 4 hours to completely render a 14:30 sequence.

    While doing this routine I noticed that my CPU was maxed out during all renders.  Resource Monitor reported continuous CPU usage of 143% (all other indicators, RAM, disk speed, etc., were well under their maximums).  This leads me to suspect that maybe there is a CPU overheating problem.  But that would shut down the entire system, correct?

    Once the entire sequence had been rendered (all green lines above the clips) I tried exporting with the "Use preview files" option enabled.  Failure again.  I tried exporting uncompressed AVI, QuickTimes and PNG sequences.  All failed.  I tried exporting from PP as well as from Media Encoder.  Both failed.  I tried exporting with the 'Use Maximum Render Quality' option enabled.  Surprisingly, this produced the longest successful encoding time (32 minutes before failure).  Incidentally, using the MRQ option kept my CPU load sawtoothing between 100% and 75%, so maybe there is something to the CPU overheating theory?

    In a final desperate attempt to save my video project (looming deadlines!) I divided the assembly sequence into five portions (I cut at the fade-to-white or fade-to-black transitions) and exported each of them.  Each export time was within 20 minutes.  It worked.

    I noticed another Adobe Forums thread that seems to describe similar difficulties with Lumetri effects in Creative Cloud 2015.  Perhaps the Lumetri effects are not ready for prime time?

    https://forums.Adobe.com/thread/1236018?start=0&tstart=0

    Please let me know if anyone has any ideas on this.  In the meantime, I'll wind up this project and never use Lumetri effects again.

    Hello Kevin,

    Thank you for offering help to solve this problem.

    I was able to export the majority of the sequence with all the Lumetri, Red Giant and FilmConvert effects mixed on the clips.  After rendering the sequence using the tedious method described above, I segmented the sequence into quarters; all exported except one.  I divided that one in two; one half exported, the other did not.  I cut that half in two again; once more, only one half exported without failure.  I finally narrowed the troublesome part down to a 35-second sequence of clips: a green-screen interview with superimposed titles and overlaid b-roll clips.

    I removed all the Lumetri effects from these clips and replaced them with Red Giant Colorista III effects, which are very similar to the Lumetri effect in GUI and application.  The sequence rendered completely without an "unknown error."  It then exported successfully (and used about 70% CPU compared to 110%+ CPU activity when exporting with Lumetri).

    Could the difference be that the Lumetri Color effect has this YUV icon next to it in the Effect Controls panel?  What does YUV mean?

    The problem is perhaps not entirely related to the Lumetri Color effect (as I said, I was able to export the majority of the 14:30 sequence using it, although only gradually, by segmenting the sequence into smaller portions), but if switching is a solution then I'll use it.

    I lost a day fixing this.  If this thread is useful to someone else, then it was worthwhile.

  • Why are the archivelogs placed in the +FRA diskgroup?

    Dear Sir.

    Why are the archivelogs placed in the +FRA diskgroup?

    Thanks in advance,

    IVW

    The fast recovery area (FRA) is a directory managed by Oracle. Once configured, RMAN uses the FRA as the default location for RMAN backup and restore files. Archivelogs and RMAN backup files are the files used for restoring the database.

    Archivelogs do not need to be stored in the FRA and you can specify a different location, but there are several advantages; for example, RMAN automatically identifies and finds the backup and restore files stored in the FRA.
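    If you do want the archived logs outside the FRA, you can simply point one of the log_archive_dest_n parameters somewhere else; a minimal sketch (the diskgroup name is just an example):

    SQL> ALTER SYSTEM SET log_archive_dest_1='LOCATION=+ARCHDG' SCOPE=BOTH;
    SQL> ARCHIVE LOG LIST;

    With no explicit destination set and db_recovery_file_dest configured, the archived logs default to the FRA, which is the behaviour you are seeing.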

  • MA: terminating the instance due to error 472

    Hi all

    Today our database crashed with the errors below in the alert log. DB: 10.2.0.3.0 (32-bit), running on Linux 4.6.

    Completed checkpoint up to RBA [0x1730.2.10], SCN: 5979396038739

    Wed May 28 08:56:31 2014

    Errors in file /oracle/clndb/10.2.0/admin/PROD_damacdb/udump/TESTcln_ora_6562.trc:

    ORA-07445: exception encountered: core dump [kokcup()+97] [SIGSEGV] [Address not mapped to object] [0xC000095D] [] []

    Wed May 28 08:56:32 2014

    Errors in file /oracle/clndb/10.2.0/admin/PROD_damacdb/udump/TESTcln_ora_6562.trc:

    ORA-07445: exception encountered: core dump [kghssgmm()+83] [SIGFPE] [Integer divide by zero] [0xAD03465] [] []

    ORA-07445: exception encountered: core dump [kokcup()+97] [SIGSEGV] [Address not mapped to object] [0xC000095D] [] []

    Wed May 28 08:59:55 2014

    Beginning log switch checkpoint up to RBA [0x1731.2.10], SCN: 5979396047597

    Thread 1 advanced to log sequence 5937

    Current log# 2 seq# 5937 mem# 0: /d02/clndata/log02a.dbf

    Current log# 2 seq# 5937 mem# 1: /d02/clndata/log02b.dbf

    Wed May 28 09:02:25 2014

    Errors in file /oracle/clndb/10.2.0/admin/PROD_damacdb/udump/TESTcln_ora_2032.trc:

    ORA-07445: exception encountered: core dump [kghupr_flg()+551] [SIGSEGV] [Address not mapped to object] [0x10] [] []

    Wed May 28 09:02:49 2014

    Errors in file /oracle/clndb/10.2.0/admin/PROD_damacdb/bdump/TESTcln_pmon_23066.trc:

    ORA-07445: exception encountered: core dump [kghupr_flg()+551] [SIGSEGV] [Address not mapped to object] [0x10] [] []

    Wed May 28 09:02:54 2014

    MA: terminating the instance due to error 472

    Instance terminated by MA, pid = 23070

    Wed May 28 09:08:01 2014

    Starting ORACLE instance (normal)

    Wed May 28 09:08:01 2014

    Db_block_buffers system parameter set without VLM on.

    LICENSE_MAX_SESSION = 0

    LICENSE_SESSIONS_WARNING = 0

    Picked latch-free SCN scheme 2

    Using LOG_ARCHIVE_DEST_1 parameter default value as /oracle/clndb/10.2.0/dbs/arch

    Autotune of undo retention is turned on.

    IMODE = BR

    ILAT = 220

    LICENSE_MAX_USERS = 0

    SYS auditing is enabled

    ksdpec: called for event 13740 prior to event group initialization

    Starting up ORACLE RDBMS Version: 10.2.0.3.0.

    System parameters with non-default values:

    _trace_files_public = FALSE

    process = 1000

    sessions = 2000

    TIMED_STATISTICS = TRUE

    resource_limit = TRUE

    shared_pool_size = 838860800

    STREAMS_POOL_SIZE = 50331648

    SHARED_POOL_RESERVED_SIZE = 83886080

    nls_language = American

    nls_territory = America

    nls_sort = binary

    NLS_DATE_FORMAT = DD-MON-RR

    nls_numeric_characters =.,.

    nls_comp = binary

    nls_length_semantics = BYTE

    resource_manager_plan = INTERNAL_PLAN

    control_files = d02/clndata/cntrl01.dbf, d02/clndata/cntrl02.dbf, /d02/clndata/cntrl03.dbf

    db_block_buffers = 150000

    db_block_checksum = TRUE

    DB_BLOCK_SIZE = 8192

    compatible = 10.2.0

    log_buffer = 15276032

    log_checkpoint_interval = 100000

    log_checkpoint_timeout = 1200

    DB_FILES = 550

    db_file_multiblock_read_count = 8

    log_checkpoints_to_alert = TRUE

    dml_locks = 10000

    UNDO_MANAGEMENT = AUTO

    undo_tablespace = APPS_UNDOTS1

    UNDO_RETENTION = 7200

    db_block_checking = FALSE

    O7_DICTIONARY_ACCESSIBILITY = FALSE

    Remote_login_passwordfile = NONE

    audit_sys_operations = TRUE

    session_cached_cursors = 1200

    UTL_FILE_DIR = / usr/tmp, /oracle/clndb/10.2.0/appsutil/outbound/PROD_damacdb

    plsql_code_type = INTERPRETED

    plsql_optimize_level = 2

    JOB_QUEUE_PROCESSES = 2

    _system_trig_enabled = TRUE

    CURSOR_SHARING = TRUE

    parallel_min_servers = 0

    PARALLEL_MAX_SERVERS = 8

    background_dump_dest = /oracle/clndb/10.2.0/admin/PROD_damacdb/bdump

    user_dump_dest = /oracle/clndb/10.2.0/admin/PROD_damacdb/udump

    max_dump_file_size = 20480

    core_dump_dest = /oracle/clndb/10.2.0/admin/PROD_damacdb/cdump

    session_max_open_files = 20

    db_name = TESTCLN

    open_cursors = 1500

    os_authent_prefix =

    _sort_elimination_cost_ratio = 5

    sql92_security = TRUE

    _b_tree_bitmap_plans = FALSE

    _fast_full_scan_enabled = FALSE

    _sqlexec_progression_cost = 2147483647

    _like_with_bind_as_equality = TRUE

    pga_aggregate_target = 4194304000

    workarea_size_policy = AUTO

    aq_tm_processes = 1

    PMON started with pid = 2, OS id = 4457

    PSP0 started with pid = 3, OS id = 4459

    MA started with pid = 4, OS id = 4461

    DBW0 started with pid = 5, OS id = 4464

    LGWR started with pid = 6, OS id = 4466

    CKPT started with pid = 7, OS id = 4468

    SMON started with pid = 8, OS id = 4470

    RECO started with pid = 9, OS id = 4472

    CJQ0 started with pid = 10, OS id = 4474

    MMON started with pid = 11, OS id = 4476

    MMNL started with pid = 12, OS id = 4478

    Wed May 28 09:08:03 2014

    ALTER DATABASE MOUNT

    Wed May 28 09:08:07 2014

    Setting recovery target incarnation to 2

    Wed May 28 09:08:07 2014

    Successful mount of redo thread 1, with mount id 3266286451

    Wed May 28 09:08:07 2014

    Database mounted in Exclusive Mode

    Completed: ALTER DATABASE MOUNT

    Wed May 28 09:08:07 2014

    ALTER DATABASE OPEN

    Wed May 28 09:08:08 2014

    Beginning crash recovery of 1 threads

    parallel recovery started with 3 processes

    Wed May 28 09:08:09 2014

    Started redo scan

    Wed May 28 09:08:09 2014

    Completed redo scan

    5831 redo blocks read, 402 data blocks need recovery

    Wed May 28 09:08:09 2014

    Started redo application at

    Thread 1: logseq 5936, block 8590

    Wed May 28 09:08:09 2014

    Recovery of Online Redo Log: Thread 1 Group 1 Seq 5936 Reading mem 0

    Mem# 0: /d02/clndata/log01a.dbf

    Mem# 1: /d02/clndata/log01b.dbf

    Wed May 28 09:08:10 2014

    Recovery of Online Redo Log: Thread 1 Group 2 Seq 5937 Reading mem 0

    Mem# 0: /d02/clndata/log02a.dbf

    Mem# 1: /d02/clndata/log02b.dbf

    Wed May 28 09:08:10 2014

    Completed redo application

    Wed May 28 09:08:10 2014

    Completed crash recovery at

    Thread 1: logseq 5937, block 4026, SCN 5979396068960

    402 data blocks read, 387 data blocks written, 5831 redo blocks read

    Wed May 28 09:08:11 2014

    Thread 1 advanced to log sequence 5938

    Thread 1 opened at log sequence 5938

    Current log# 1 seq# 5938 mem# 0: /d02/clndata/log01a.dbf

    Current log# 1 seq# 5938 mem# 1: /d02/clndata/log01b.dbf

    Successful open of redo thread 1

    Wed May 28 09:08:11 2014

    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set

    Wed May 28 09:08:11 2014

    Incremental checkpoint up to RBA [0x1732.3.0], current log tail at RBA [0x1732.3.0]

    Wed May 28 09:08:13 2014

    Successfully onlined Undo Tablespace 368.

    Wed May 28 09:08:13 2014

    SMON: enabling tx recovery

    Wed May 28 09:08:13 2014

    Database Characterset is UTF8

    replication_dependency_tracking turned off (no async multimaster replication found)

    Starting background process QMNC

    QMNC started with pid = 17, OS id = 4490

    Wed May 28 09:09:16 2014

    Completed: ALTER DATABASE OPEN

    Trace file

    /Oracle/clndb/10.2.0/Admin/PROD_damacdb/udump/TESTcln_ora_6562.TRC

    Oracle Database 10 g Enterprise Edition release 10.2.0.3.0 - Production

    With partitioning, OLAP and Data Mining options

    ORACLE_HOME = /oracle/clndb/10.2.0

    System name: Linux

    Node name: TESTclonedb.damacerp.com

    Release: 2.6.9-67.ELsmp

    Version: #1 SMP Wed Nov 7 13:58:04 UTC 2007

    Machine: i686

    Instance name: TESTCLN

    Redo thread mounted by this instance: 1

    Oracle process number: 317

    Unix process pid: 6562, image: [email protected]

    2014-05-28 08:56:31.820

    ACTION NAME: (9287) 2014-05-28 08:56:31.696

    MODULE NAME:(xdo.oa.template.server.TemplatesAM:R) 2014-05-28 08:56:31.696

    SERVICE NAME: (TESTCLN) 2014-05-28 08:56:31.696

    SESSION ID: (1783.20135) 2014-05-28 08:56:31.696

    Exception signal: 11 (SIGSEGV), code: 1 (Address not mapped to object), addr: 0xc000095d, PC: [0x98b338b, kokcup()+97]

    Registers:

    %eax: 0x00000000  %ebx: 0x794034b4  %ecx: 0x794034b4

    %edx: 0xc0000929  %edi: 0x0cd3a9e0  %esi: 0xb72da9e8

    %esp: 0xbfffb9dc  %ebp: 0xbfffb9e8  %eip: 0x098b338b

    %efl: 0x00210246

    kokcup()+86 (0x98b3380) mov 0x24(%ebx),%edx

    kokcup()+89 (0x98b3383) testb $0x1,%dl

    kokcup()+94 (0x98b3388) mov 0x18(%ebx),%edx

    > kokcup()+97 (0x98b338b) movzw 0x34(%edx),%edx

    kokcup()+101 (0x98b338f) test %edx,%edx

    kokcup()+103 (0x98b3391) je 0x98b33a8

    kokcup()+105 (0x98b3393) cmp $0xffff,%edx

    kokcup()+111 (0x98b3399) jge 0x98b33a8

    2014-05-28 08:56:31.832

    ksedmp: internal or fatal error

    [ORA-07445: exception encountered: core dump [kokcup () + 97] [SIGSEGV] [address not mapped to object] [0xC000095D] []]

    ----- Call Stack Trace -----

    calling              call     entry                argument values in hex

    location             type     point                (? means dubious value)

    -------------------- -------- -------------------- ----------------------------

    ksedst () + 27 call ksedst1() 1? 1?

    ksedmp () + 557 call ksedst() 1? 8168F10? B748BF3C?

    B80650? 10? B7491A90?

    ssexhd () + 882 call ksedmp() 3? 98B338B? 636B6F6B?

    29287075? 37392B? 0?

    kokcup () + 97 00000000 B signal? B748DC90? B748DD10?

    kocdsfr () + 460 call 00000000 CD3A9E0? B72DA9E8? 1?

    B71627C8? B716152C?

    CCEAB18?

    Hello

    If you have a valid Oracle Support contract, I suggest that you raise a service request with Oracle Support.

    This could well be a bug, and if it is determined to be a bug, you will have to upgrade your database to a more recent supported version, since 10.2 has been out of support for some time.

    Kind regards

    Suntrupth

  • Commas and periods are different in the Excel export from what the dashboard shows

    I have a user in Spain who is having this problem:

    (A) on his dashboard, a value is displayed like this:

    1.234,56

    BUT...

    (B) when it is exported to Excel, the value comes out like this:

    1,234.56

    (Notice the difference in the comma and the period between A & B).

    I am from the United States, so I always see that value as A, but it seems that in Spain the user always wants the value to display as B.

    Has anyone else seen this problem? If so, how could it be fixed?

    Hello

    Also have a look at the Oracle Support doc:

    OBIEE 11 g: how to control thousand and Decimal Separator when you export an analytics report to Excel (Doc ID 1452362.1)

    (mark this answer as helpful or correct if it helps!)

    Kind regards

    -DM

  • Terminating the instance due to error 119

    System: Oracle 11gR2 on Oracle VM.

    Issue: Oracle goes down after LISTENER.ORA is deleted or corrupted. Any reason?

    Action taken: used the *.ora files from a backup taken in a healthy state, and dbstart works fine now.

    Unresolved question: does dbstart depend on the availability of the *.ora files? And what is this error 119 (could not find info on it)?

    Thank you.

    f0c7e0f7-DAB5-4B63-a36c-04b7e8eb552d wrote:

    Hmm...

    Before even that I performed the steps suggested, DB and the LISTENER are rising.

    The issue was that when I deleted listener.ora, stopped the virtual machine (guest OS and DB) and brought Oracle back up, the dbstart script threw error 119.

    1. Without LISTENER.ORA in place, the "dbstart" script fails to bring up the Oracle services. Why?

    2. Terminating the instance due to error 119. What does this error mean, anyway?

    Thank you

    SPFile contains errors.

    Edit the pfile & delete line involving "LISTENER_ORCL."

    then do as below

    sqlplus

    / as sysdba

    Create spfile from pfile;

    startup

  • How to export table data with cell coloring based on value

    Hi all

    I use jdeveloper 11.1.1.6

    I want to export data from the table with a lot of formatting, such as coloring cells based on value and so on. How do I do this?

    You can use Apache POI - http://poi.apache.org/

    See this http://www.techartifact.com/blogs/2013/08/generate-excel-file-in-oracle-adf-using-apache-poi.html

  • APEX export report reruns the query

    Hi all!

    Version-
    Oracle 11.2
    Apex 4.1

    We have an APEX report that is generated by calling a function. When the report is displayed in the region and you click Export to export it to CSV, the query is executed again. Shouldn't the report be exported without rerunning the query?
    Are there any parameters that must be configured to stop the query from running again while exporting a report, or is this just how APEX works?

    Thanks in advance.

    -SS.

    Exporting a report is a separate request, and it will get the requested data and create a CSV file out of it. Thus, the query will always run once more. The report that you view on your page will only fetch as many rows to the client as requested by the pagination, whereas the export of your report may ask for many more rows.

    Denes Kubicek
    -------------------------------------------------------------------
    http://deneskubicek.blogspot.com/
    http://www.Apress.com/9781430235125
    http://Apex.Oracle.com/pls/OTN/f?p=31517:1
    http://www.Amazon.de/Oracle-Apex-XE-Praxis/DP/3826655494
    -------------------------------------------------------------------

  • Invalid environment because of the previous exception

    Hi, my long-running program suddenly hit an error; the following message appears:
    java.lang.RuntimeException: invalid environment because of the previous exception: com.sleepycat.je.log.DbChecksumException: (I 3.3.82) read the type of invalid log entry: 46 lsn = 0 x 0/0x205c69 logSource=com.sleepycat.je.log.FileHandleSource@568ee2b9
    + wisers.crawler.batch.store.ArticleStoreImpl.contains(ArticleStoreImpl.java:63) +.
    + wisers.crawler.batch.extractor.ListingExtractor.extract(ListingExtractor.java:93) +.
    + wisers.crawler.batch.processor.CrawlWorkerImpl.execute(CrawlWorkerImpl.java:276) +.
    + wisers.crawler.batch.schedule.JobManager$ Runner$ 1.run(JobManager.java:73) +.
    + java.util.concurrent.Executors$ (Executors.java:441) RunnableAdapter.call +.
    + java.util.concurrent.FutureTask$ (FutureTask.java:303) Sync.innerRun +.
    + java.util.concurrent.FutureTask.run(FutureTask.java:138) +.
    + java.util.concurrent.ThreadPoolExecutor$ (ThreadPoolExecutor.java:886) Worker.runTask +.
    + java.util.concurrent.ThreadPoolExecutor$ (ThreadPoolExecutor.java:908) Worker.run +.
    + java.lang.Thread.run(Thread.java:662) +.
    Caused by: invalid environment because of the previous exception: com.sleepycat.je.log.DbChecksumException: (I 3.3.82) read the type of invalid log entry: 46 lsn = 0 x 0/0x205c69 logSource=com.sleepycat.je.log.FileHandleSource@568ee2b9
    + com.sleepycat.je.log.LogEntryHeader. (LogEntryHeader.java:94) +.
    + com.sleepycat.je.log.LogManager.getLogEntryFromLogSource(LogManager.java:699) +.
    + com.sleepycat.je.log.LogManager.getLogEntry(LogManager.java:664) +.
    + com.sleepycat.je.tree.IN.fetchTarget(IN.java:1215) +.
    + com.sleepycat.je.dbi.CursorImpl.fetchCurrent(CursorImpl.java:2320) +.
    + com.sleepycat.je.dbi.CursorImpl.getCurrentAlreadyLatched(CursorImpl.java:1427) +.
    + com.sleepycat.je.dbi.CursorImpl.getNextWithKeyChangeStatus(CursorImpl.java:1573) +.
    + com.sleepycat.je.dbi.CursorImpl.getNext(CursorImpl.java:1499) +.
    + com.sleepycat.je.cleaner.UtilizationProfile.getObsoleteDetail(UtilizationProfile.java:953) +.
    + com.sleepycat.je.cleaner.FileProcessor.processFile(FileProcessor.java:326) +.
    + com.sleepycat.je.cleaner.FileProcessor.doClean(FileProcessor.java:233) +.
    + com.sleepycat.je.cleaner.FileProcessor.onWakeup(FileProcessor.java:138) +.
    + com.sleepycat.je.utilint.DaemonThread.run(DaemonThread.java:141) +.
    +     ... more 1 +.


    After that, I used com.sleepycat.je.util.DbVerify to try to examine the affected database for errors, and got the following information:
    Checking database wm_cyolcn
    Tree for wm_cyolcn control
    Encountered error (continuous):
    com.sleepycat.je.DatabaseException: (I 3.3.82) fetchTarget 0 x 0/0x205e59 parent IN = 2281 IN class = com.sleepycat.je.tree.BIN lastFullVersion = 0 x 0/0x4adf10 parent.getDirty () = State false = 0 com.sleepycat.je.log.DbChecksumException: (I 3.3.82) read the type of invalid log entry: 108 lsn = 0 x 0/0x205e59 logSource=com.sleepycat.je.log.FileHandleSource@5210f6d3
    Error touch 687474703a2f2f6865616c74682e63796f6c2e636f6d2f636f6e74656e742f323031312d30382f31392f636f6e74656e745f343739363335312e68746d http://health.cyol.com/content/2011-08/19/content_4796351.htm
    UNKNOWN error data
    Encountered error (continuous):
    com.sleepycat.je.DatabaseException: (I 3.3.82) fetchTarget 0 x 0/0x23110a parent IN = 2281 IN class = com.sleepycat.je.tree.BIN lastFullVersion = 0 x 0/0x4adf10 parent.getDirty () = State false = 0 invalid environment because of the previous exception: com.sleepycat.je.log.DbChecksumException: (I 3.3.82) read the type of invalid log entry: 108 lsn = 0 x 0/0x205e59 logSource=com.sleepycat.je.log.FileHandleSource@5210f6d3
    Error touch 687474703a2f2f6865616c74682e63796f6c2e636f6d2f636f6e74656e742f323031312d30382f31392f636f6e74656e745f343739363335332e68746d http://health.cyol.com/content/2011-08/19/content_4796353.htm
    UNKNOWN error data
    Encountered error (continuous):
    com.sleepycat.je.DatabaseException: (I 3.3.82) fetchTarget 0 x 0/0x230efd parent IN = 2281 IN class = com.sleepycat.je.tree.BIN lastFullVersion = 0 x 0/0x4adf10 parent.getDirty () = State false = 0 invalid environment because of the previous exception: com.sleepycat.je.log.DbChecksumException: (I 3.3.82) read the type of invalid log entry: 108 lsn = 0 x 0/0x205e59 logSource=com.sleepycat.je.log.FileHandleSource@5210f6d3
    Error touch 687474703a2f2f6865616c74682e63796f6c2e636f6d2f636f6e74656e742f323031312d30382f31392f636f6e74656e745f343739363935342e68746d http://health.cyol.com/content/2011-08/19/content_4796954.htm
    UNKNOWN error data
    Encountered error (continuous):
    com.sleepycat.je.DatabaseException: (I 3.3.82) fetchTarget 0 x 0/0x235d78 parent IN = 2281 IN class = com.sleepycat.je.tree.BIN lastFullVersion = 0 x 0/0x4adf10 parent.getDirty () = State false = 0 invalid environment because of the previous exception: com.sleepycat.je.log.DbChecksumException: (I 3.3.82) read the type of invalid log entry: 108 lsn = 0 x 0/0x205e59 logSource=com.sleepycat.je.log.FileHandleSource@5210f6d3
    Error touch 687474703a2f2f6865616c74682e63796f6c2e636f6d2f636f6e74656e742f323031312d30382f31392f636f6e74656e745f343739373039382e68746d http://health.cyol.com/content/2011-08/19/content_4797098.htm
    UNKNOWN error data
    Encountered error (continuous):
    com.sleepycat.je.DatabaseException: (I 3.3.82) fetchTarget 0 x 0/0x235b6b parent IN = 2281 IN class = com.sleepycat.je.tree.BIN lastFullVersion = 0 x 0/0x4adf10 parent.getDirty () = State false = 0 invalid environment because of the previous exception: com.sleepycat.je.log.DbChecksumException: (I 3.3.82) read the type of invalid log entry: 108 lsn = 0 x 0/0x205e59 logSource=com.sleepycat.je.log.FileHandleSource@5210f6d3
    Error touch 687474703a2f2f6865616c74682e63796f6c2e636f6d2f636f6e74656e742f323031312d30382f32392f636f6e74656e745f343833323435352e68746d http://health.cyol.com/content/2011-08/29/content_4832455.htm
    UNKNOWN error data
    Encountered error (continuous):
    com.sleepycat.je.DatabaseException: (I 3.3.82) fetchTarget 0 x 0/0x343eda parent IN = 2281 IN class = com.sleepycat.je.tree.BIN lastFullVersion = 0 x 0/0x4adf10 parent.getDirty () = State false = 0 invalid environment because of the previous exception: com.sleepycat.je.log.DbChecksumException: (I 3.3.82) read the type of invalid log entry: 108 lsn = 0 x 0/0x205e59 logSource=com.sleepycat.je.log.FileHandleSource@5210f6d3
    Error touch 687474703a2f2f6865616c74682e63796f6c2e636f6d2f636f6e74656e742f323031312d31302f32362f636f6e74656e745f353037363133322e68746d http://health.cyol.com/content/2011-10/26/content_5076132.htm
    UNKNOWN error data
    Encountered error (continuous):
    com.sleepycat.je.DatabaseException: (I 3.3.82) fetchTarget 0 x 0/0x343d7c parent IN = 2281 IN class = com.sleepycat.je.tree.BIN lastFullVersion = 0 x 0/0x4adf10 parent.getDirty () = State false = 0 invalid environment because of the previous exception: com.sleepycat.je.log.DbChecksumException: (I 3.3.82) read the type of invalid log entry: 108 lsn = 0 x 0/0x205e59 logSource=com.sleepycat.je.log.FileHandleSource@5210f6d3
    Error touch 687474703a2f2f6865616c74682e63796f6c2e636f6d2f636f6e74656e742f323031312d31322f32302f636f6e74656e745f353338383837322e68746d http://health.cyol.com/content/2011-12/20/content_5388872.htm
    UNKNOWN error data
    Encountered error (continuous):
    com.sleepycat.je.DatabaseException: (I 3.3.82) fetchTarget 0 x 0/0x4004f2 parent IN = 2281 IN class = com.sleepycat.je.tree.BIN lastFullVersion = 0 x 0/0x4adf10 parent.getDirty () = State false = 0 invalid environment because of the previous exception: com.sleepycat.je.log.DbChecksumException: (I 3.3.82) read the type of invalid log entry: 108 lsn = 0 x 0/0x205e59 logSource=com.sleepycat.je.log.FileHandleSource@5210f6d3
    Error touch 687474703a2f2f6865616c74682e63796f6c2e636f6d2f636f6e74656e742f323031312d31322f32322f636f6e74656e745f353430353431352e68746d http://health.cyol.com/content/2011-12/22/content_5405415.htm
    UNKNOWN error data
    Encountered error (continuous):
    com.sleepycat.je.DatabaseException: (I 3.3.82) fetchTarget 0 x 0 / 0 x 400394 IN parent = 2281 IN class = com.sleepycat.je.tree.BIN lastFullVersion = 0 x 0/0x4adf10 parent.getDirty () = State false = 0 invalid environment because of the previous exception: com.sleepycat.je.log.DbChecksumException: (I 3.3.82) read the type of invalid log entry: 108 lsn = 0 x 0/0x205e59 logSource=com.sleepycat.je.log.FileHandleSource@5210f6d3
    Error touch 687474703a2f2f6865616c74682e63796f6c2e636f6d2f636f6e74656e742f323031312d31322f32322f636f6e74656e745f353430383033302e68746d http://health.cyol.com/content/2011-12/22/content_5408030.htm
    UNKNOWN error data
    Encountered error (continuous):
    com.sleepycat.je.DatabaseException: (I 3.3.82) fetchTarget 0 x 0/0x4026d3 parent IN = 2281 IN class = com.sleepycat.je.tree.BIN lastFullVersion = 0 x 0/0x4adf10 parent.getDirty () = State false = 0 invalid environment because of the previous exception: com.sleepycat.je.log.DbChecksumException: (I 3.3.82) read the type of invalid log entry: 108 lsn = 0 x 0/0x205e59 logSource=com.sleepycat.je.log.FileHandleSource@5210f6d3
    Error touch 687474703a2f2f6865616c74682e63796f6c2e636f6d2f636f6e74656e742f323031312d31322f32322f636f6e74656e745f353430383033322e68746d http://health.cyol.com/content/2011-12/22/content_5408032.htm
    UNKNOWN error data
    Encountered error (continuous):
    com.sleepycat.je.DatabaseException: (I 3.3.82) fetchTarget 0 x 0/0x4069fd parent IN = 2281 IN class = com.sleepycat.je.tree.BIN lastFullVersion = 0 x 0/0x4adf10 parent.getDirty () = State false = 0 invalid environment because of the previous exception: com.sleepycat.je.log.DbChecksumException: (I 3.3.82) read the type of invalid log entry: 108 lsn = 0 x 0/0x205e59 logSource=com.sleepycat.je.log.FileHandleSource@5210f6d3
    Error touch 687474703a2f2f6865616c74682e63796f6c2e636f6d2f636f6e74656e742f323031312d31322f32362f636f6e74656e745f353432383330312e68746d http://health.cyol.com/content/2011-12/26/content_5428301.htm
    UNKNOWN error data
    Encountered error (continuous):
    com.sleepycat.je.DatabaseException: (I 3.3.82) fetchTarget 0 x 0/0x4067f0 parent IN = 2281 IN class = com.sleepycat.je.tree.BIN lastFullVersion = 0 x 0/0x4adf10 parent.getDirty () = State false = 0 invalid environment because of the previous exception: com.sleepycat.je.log.DbChecksumException: (I 3.3.82) read the type of invalid log entry: 108 lsn = 0 x 0/0x205e59 logSource=com.sleepycat.je.log.FileHandleSource@5210f6d3
    Error touch 687474703a2f2f6865616c74682e63796f6c2e636f6d2f636f6e74656e742f323031312d31322f32362f636f6e74656e745f353432383431302e68746d http://health.cyol.com/content/2011-12/26/content_5428410.htm
    UNKNOWN error data
    Encountered error (continuous):
    com.sleepycat.je.DatabaseException: (I 3.3.82) fetchTarget 0 x 0/0x40cd5e parent IN = 2281 IN class = com.sleepycat.je.tree.BIN lastFullVersion = 0 x 0/0x4adf10 parent.getDirty () = State false = 0 invalid environment because of the previous exception: com.sleepycat.je.log.DbChecksumException: (I 3.3.82) read the type of invalid log entry: 108 lsn = 0 x 0/0x205e59 logSource=com.sleepycat.je.log.FileHandleSource@5210f6d3
    Error touch 687474703a2f2f6865616c74682e63796f6c2e636f6d2f636f6e74656e742f323031312d31322f33302f636f6e74656e745f353436313733372e68746d http://health.cyol.com/content/2011-12/30/content_5461737.htm
    UNKNOWN error data
    Encountered error (continuous):
    com.sleepycat.je.DatabaseException: (I 3.3.82) fetchTarget 0 x 0/0x4110dd parent IN = 2281 IN class = com.sleepycat.je.tree.BIN lastFullVersion = 0 x 0/0x4adf10 parent.getDirty () = State false = 0 invalid environment because of the previous exception: com.sleepycat.je.log.DbChecksumException: (I 3.3.82) read the type of invalid log entry: 108 lsn = 0 x 0/0x205e59 logSource=com.sleepycat.je.log.FileHandleSource@5210f6d3
    Error touch 687474703a2f2f6865616c74682e63796f6c2e636f6d2f636f6e74656e742f323031312d31322f33312f636f6e74656e745f353436393237382e68746d http://health.cyol.com/content/2011-12/31/content_5469278.htm
    UNKNOWN error data
    Encountered error (continuous):
    com.sleepycat.je.DatabaseException: (I 3.3.82) fetchTarget 0 x 0/0x4136bf parent IN = 2281 IN class = com.sleepycat.je.tree.BIN lastFullVersion = 0 x 0/0x4adf10 parent.getDirty () = State false = 0 invalid environment because of the previous exception: com.sleepycat.je.log.DbChecksumException: (I 3.3.82) read the type of invalid log entry: 108 lsn = 0 x 0/0x205e59 logSource=com.sleepycat.je.log.FileHandleSource@5210f6d3
    Error touch 687474703a2f2f6865616c74682e63796f6c2e636f6d2f636f6e74656e742f323031322d30312f30362f636f6e74656e745f353439313038392e68746d http://health.cyol.com/content/2012-01/06/content_5491089.htm
    UNKNOWN error data
    Encountered error (continuous):
    com.sleepycat.je.DatabaseException: (I 3.3.82) fetchTarget 0 x 0/0x4738d2 parent IN = 2281 IN class = com.sleepycat.je.tree.BIN lastFullVersion = 0 x 0/0x4adf10 parent.getDirty () = State false = 0 invalid environment because of the previous exception: com.sleepycat.je.log.DbChecksumException: (I 3.3.82) read the type of invalid log entry: 108 lsn = 0 x 0/0x205e59 logSource=com.sleepycat.je.log.FileHandleSource@5210f6d3
    UNKNOWN error key
    UNKNOWN error data
    numBottomInternalNodes = 103
    level 1: count = 103
    numInternalNodes = 3
    level 2: count = 2
    level 3: count = 1
    numLeafNodes = 6620
    numDeletedLeafNodes = 0
    numDuplicateCountLeafNodes = 0
    mainTreeMaxDepth = 3
    duplicateTreeMaxDepth = 0

    Invalid environment because of the previous exception: com.sleepycat.je.log.DbChecksumException: (I 3.3.82) read the type of invalid log entry: 108 lsn = 0 x 0/0x205e59 logSource=com.sleepycat.je.log.FileHandleSource@5210f6d3
    to com.sleepycat.je.log.LogEntryHeader. < init > (LogEntryHeader.java:94)
    at com.sleepycat.je.log.LogManager.getLogEntryFromLogSource(LogManager.java:699)
    at com.sleepycat.je.log.LogManager.getLogEntry(LogManager.java:664)
    at com.sleepycat.je.tree.IN.fetchTarget(IN.java:1215)
    at com.sleepycat.je.dbi.CursorImpl.fetchCurrent(CursorImpl.java:2320)
    at com.sleepycat.je.dbi.CursorImpl.getCurrentAlreadyLatched(CursorImpl.java:1427)
    at com.sleepycat.je.dbi.CursorImpl.getNextWithKeyChangeStatus(CursorImpl.java:1573)
    at com.sleepycat.je.dbi.CursorImpl.getNext(CursorImpl.java:1499)
    at com.sleepycat.je.dbi.DatabaseImpl.walkDatabaseTree(DatabaseImpl.java:1355)
    at com.sleepycat.je.dbi.DatabaseImpl.verify(DatabaseImpl.java:1303)
    at com.sleepycat.je.util.DbVerify.verifyOneDbImpl(DbVerify.java:371)
    at com.sleepycat.je.util.DbVerify.verify(DbVerify.java:275)
    at com.sleepycat.je.util.DbVerify.main(DbVerify.java:94)
    Exit code = false

    I want to know: can the db file be repaired? And how do I do it?
    Thank you very much!

    I hope that the DbDump and DbLoad utilities will help you recover and reload your data. Start at http://docs.oracle.com/cd/E17277_02/html/GettingStartedGuide/commandlinetools.html#DbDump, or with the DbDump javadoc at http://docs.oracle.com/cd/E17277_02/html/java/com/sleepycat/je/util/DbDump.html, and first try the -r option, then -R.
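    A rough sketch of the salvage/reload cycle (the paths are placeholders, je.jar must be on the classpath, and the database name is the one reported by DbVerify):

    # dump the damaged database in salvage mode (-r); fall back to -R if -r still fails
    java -cp je.jar com.sleepycat.je.util.DbDump -h /path/to/env -s wm_cyolcn -r -v -f wm_cyolcn.dump

    # reload the salvaged records into a fresh environment
    java -cp je.jar com.sleepycat.je.util.DbLoad -h /path/to/new_env -s wm_cyolcn -f wm_cyolcn.dump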

  • PMON (ospid: 8143): terminating the instance due to error 472

    Hi friends,

    _ My environment

    OS: oel4u5
    Apps Version: R12.1.1
    DB version: 11.1.0.7

    When I try to start my 11g ebiz database, it begins to start and then automatically goes down because of PMON termination. Looking at the alert log file, it shows the ORA-600 error below.

    Block recovery (PMON) of file 4 block 180627
    Block recovery from logseq 125, block 70 to scn 10132192035480
    Recovery of Online Redo Log: Thread 1 Group 2 Seq 125 Reading mem 0
    Mem# 0: /ebiz/oracle/db/apps_st/data/log2.dbf
    Block recovery completed at rba 125.281.16, scn 2359.364184220
    Errors in file /ebiz/oracle/db/tech_st/11.1.0/admin/TEST05_erptest05/diag/rdbms/test05/TEST05/trace/TEST05_pmon_8143.trc (incident=49647):
    ORA-00600: internal error code, arguments: [4194], [41], [37], [], [], [], [], [], [], [], []
    Errors in file /ebiz/oracle/db/tech_st/11.1.0/admin/TEST05_erptest05/diag/rdbms/test05/TEST05/trace/TEST05_pmon_8143.trc:
    ORA-00600: internal error code, arguments: [4194], [41], [37], [], [], [], [], [], [], [], []
    PMON (ospid: 8143): terminating the instance due to error 472
    Instance terminated by PMON, pid = 8143

    Below is the error message that I found in the trace file /ebiz/oracle/db/tech_st/11.1.0/admin/TEST05_erptest05/diag/rdbms/test05/TEST05/trace/TEST05_pmon_8143.trc

    (call) sess: 5f1efd10, 5f1e5ff0, usr 5f1efd10 rec news; depth: 0
    ksudlc FALSE at location: 6
    (table k2g)
    error 472 detected in background process
    ORA-00600: internal error code, arguments: [4194], [41], [37], [], [], [], [], [], [], [], []
    2011-11-27 20:19:33.113
    PMON (ospid: 8143): terminating the instance due to error 472

    Please advise.

    Thank you
    Flo

    Hi Helios,

    We identified that the error was due to undo corruption, and we followed the steps below to get past the above-mentioned error.

    Steps, as follows,

    Step 1:
    SQL> SELECT name, value FROM v$parameter WHERE name IN ('undo_management', 'undo_tablespace');

    NAME                 VALUE
    -------------------- --------------------
    UNDO_MANAGEMENT      MANUAL
    undo_tablespace      UNDO_TBS

    Step 2:
    SQL> select FILE_NAME, TABLESPACE_NAME from dba_data_files where tablespace_name like '%UNDO%';

    FILE_NAME                                     TABLESPACE_NAME
    ----------------------------------------------------------------
    /ebiz/oracle/db/apps_st/data/undotbs_02.dbf   UNDO_TBS
    /ebiz/oracle/db/apps_st/data/undotbs_01.dbf   UNDO_TBS

    Step 3: Create a new undo tablespace
    SQL> create UNDO tablespace UNDOTBS datafile '/ebiz/oracle/db/apps_st/data/undotbs01.dbf' size 1024M REUSE AUTOEXTEND ON NEXT 4096K MAXSIZE 1024M;
    Tablespace created.

    Step 4:
    SQL> ALTER SYSTEM SET undo_tablespace = 'UNDOTBS' scope = spfile;
    System altered.

    Step 5: take the old undo tablespace offline and drop it
    SQL> ALTER TABLESPACE UNDO_TBS offline;
    Tablespace altered.

    SQL> drop tablespace UNDO_TBS including contents and datafiles;
    Tablespace dropped.

    Step 6:
    Bounced the db services

    Step 7: changed the undo management setting to AUTO
    SQL> alter system set undo_management = 'AUTO' scope = spfile;
    System altered.

    SQL> SELECT name, value FROM v$parameter WHERE name IN ('undo_management', 'undo_tablespace');

    NAME                 VALUE
    -------------------- --------------------
    UNDO_MANAGEMENT      AUTO
    undo_tablespace      UNDOTBS

    Now the database is running without problems and we don't see any ORA errors in the alert log file.

    Thank you
    Flo

  • "Backup cancelled because all files were skipped"

    Hello

    I'm performing backup of database by using the following command:

    run {
    allocate channel c1 device type disk;
    backup incremental level 0 database plus archivelog delete input;
    release channel c1;
    restore database validate;
    }

    Upon inspecting the RMAN logs, it completes, and I can tell that it is a good backup. I can also list the backup pieces that were backed up. After this step, I want to back up the flash_recovery_area to tape, so I used this command:

    run {
    allocate channel t1 device type sbt;
    BACKUP RECOVERY AREA;
    DELETE OBSOLETE DEVICE TYPE sbt;
    release channel t1;
    }

    When I do that, I get this message in the logs, and I do not see any backup go to tape:

    --- LOGS ---
    allocated channel: t1
    channel t1: SID = 503 device type = SBT
    channel t1: Oracle Secure Backup

    Starting backup at 23-JUL-11
    specification does not match any archived log in the recovery catalog
    specification does not match any datafile copy in the repository
    specification does not match any backup set in the repository
    backup cancelled because all files were skipped
    Finished backup at 23-JUL-11

    RMAN retention policy will be applied to the command
    RMAN retention policy is set to recovery window of 7 days
    no obsolete backups found

    released channel: t1

    RMAN >

    I don't understand why I get this message when, in fact, I do have archivelog and datafile backupsets saved in my FRA. Any ideas?

    Thank you!

    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'C:\oracle\flash_recovery_area\PROD\AUTOBACKUP\%d-%F';
    CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 2G FORMAT 'C:\oracle\flash_recovery_area\PROD\BACKUPSET\%d-%T-%U';

    When you specify a FORMAT, Oracle takes that format as the backup location. It may happen to be physically identical to your FRA, but it is not logically the same as your FRA.
    A BACKUP RECOVERY AREA does not 'find' these backups because they are not logically in the FRA.

    You will also find that Oracle's automatic management of the FRA (where it automatically removes OBSOLETE backups, etc.) will not work against these backup pieces, once again because they are 'not in the FRA'.

    A proper RMAN backup that goes to the FRA relies on 'db_recovery_file_dest' (and 'db_recovery_file_dest_size'). Once this is set up, a simple "BACKUP DATABASE" (or "BACKUP AS COMPRESSED BACKUPSET DATABASE") will create backupsets in the FRA. It will automatically create the appropriate folder structure (each day gets a new folder). Controlfile autobackups, too, should use the FRA.
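    A minimal sketch of that setup (the size and path are examples only):

    SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 50G SCOPE=BOTH;
    SQL> ALTER SYSTEM SET db_recovery_file_dest = 'C:\oracle\flash_recovery_area' SCOPE=BOTH;

    RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;
    RMAN> BACKUP RECOVERY AREA;

    With no FORMAT clause overriding the destination, the backupsets land logically in the FRA, and BACKUP RECOVERY AREA will then find them.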

    See the Oracle Support article: 'Configuring the Flash Recovery Area correctly to allow the release of reclaimable space [ID 316074.1]'.

    Hemant K Collette

  • Log archiving stuck because of a space issue

    Hi all

    I'm on oracle 10G, OS: Linux


    Archive logs were being generated and then suddenly, because of a space issue, they stopped, since I ran out of space.
    Now I have increased the space... Will log archiving start again automatically now?

    If not, what is the procedure I should follow to start it again?


    Thank you
    Kkukreja

    Archiving will be launched automatically. If archiving does not resume, your database will hang... Whenever your database is in archivelog mode, archiving is activated automatically without having to enable it again.
    Check the log_archive_dest_state_n parameter setting, which is ENABLE by default.
    No need to worry...
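    If you want to check it and give it a push rather than wait, something like this (a sketch):

    SQL> SELECT dest_id, status, destination FROM v$archive_dest WHERE dest_id = 1;
    SQL> ALTER SYSTEM SET log_archive_dest_state_1 = ENABLE;
    SQL> ALTER SYSTEM ARCHIVE LOG ALL;

    The ENABLE is only needed if the destination shows as DEFERRED or in error; ARCHIVE LOG ALL just forces the backlog to be archived immediately.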

  • instance is down because of the 4031 error

    On the CCR (10.2.0.4, AIX, 64-bit), instance 2 went down; see the alert log:

    ORA-04031: unable to allocate 8416 bytes of shared memory ("shared pool","unknown object","sga heap(1,0)","KSXR large reply queue")

    LCK0: terminating instance due to error 4031


    (DB brought back up.)


    (checked the associated trace files)
    =================================
    Begin 4031 Diagnostic Information
    =================================
    The following information assists Oracle in diagnosing the
    causes of ORA-4031 errors. This tracing may be disabled
    by setting the init.ora parameter _4031_dump_bitvec = 0
    =====================================
    Allocation Request Summary Information
    =====================================
    Current information setting: 04014fff
    SGA Heap Dump Interval = 3600 seconds
    Dump Interval = 300 seconds
    Last Dump Time = 2010-09-19 15:18:42
    Dump Count = 1
    Allocation request for: KSXR large reply queue
    Heap: 700000010036770, size: 8416
    ******************************************************
    Memory segment HEAP DUMP heap name="sga heap(1,0)" desc=700000010036770
    extent sz=0xfe0 alt=216 het=32767 rec=9 flg=-126 opc=0
    parent=0 owner=0 nex=0 xsz=0x1000000
    lock all 1 of 2
    enabled for this bunch of times
    reserved granules for root 0 (granule size 16777216)
    ....

    Checking the memory:


    SQL> show parameter shared_

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    hi_shared_memory_address             integer     0
    max_shared_servers                   integer
    shared_memory_address                integer     0
    shared_pool_reserved_size            big integer 15938355
    shared_pool_size                     big integer 304M
    shared_server_sessions               integer
    shared_servers                       integer     0

    SQL> show parameter sga_target

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    sga_target                           big integer 7G

    SQL> show parameter cursor_sharing

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    cursor_sharing                       string      EXACT

    ###############

    On the surface, the reason seems to be that the shared pool is not big enough, but we can see that the SGA is large enough.
    Thus, it may be a shared pool fragmentation problem.

    In addition, the reason the instance went down is:

    terminating the instance due to error 4031




    The questions here are:
    1. What is the reason for the instance shutdown?


    2. How do I prevent it from happening again? Do we need to flush the shared pool periodically, or set cursor_sharing to similar or force?


    Thank you

    Hello

    Look at what V$SHARED_POOL_ADVICE suggests to you.

    See if you can:

    Use the DBMS_SHARED_POOL package to pin large packages.
    Try to reduce the use of shared memory.
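    For example (the package name in the KEEP call is only a placeholder for one of your own large packages):

    SQL> SELECT shared_pool_size_for_estimate, estd_lc_size, estd_lc_load_time
           FROM v$shared_pool_advice;

    SQL> EXEC DBMS_SHARED_POOL.KEEP('APPS.SOME_BIG_PACKAGE', 'P');

    Pinning the big packages right after startup keeps them from being aged out and reloaded, which is what fragments the shared pool over time.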

    Kind regards

  • Export DataPump with the query option

    Hi all

    My environment is IBM AIX, Oracle 10.2.0.4.0 database.

    I need to export a few sets of records from production using a query. The query joins several tables. Since we have the BLOB data type, we export using Data Pump.

    We have lower environments, but they do not have the same set of data and tables, and therefore I am not able to simulate the same query in a lower environment. But I created a small table and mocked up the query.

    My command is

    expdp system/<pwd>@orcl tables=dump.dump1 query=dump.dump1:"where num < 3" directory=DATA_PUMP_DIR dumpfile=exp_dp.dmp logfile=exp_dp.log

    The query in the command pulls two records when run directly. On running the command above, I see an 80 KB dump file.
    In the export log file,

    I see Total estimation using BLOCKS method: 64 KB
    exported DUMP.DUMP1   4.921 KB   2 rows

    My doubts are:
    (1) Is the command that I am running correct?
    (2) The estimate said 64 KB, whereas it also says 4.921 KB exported. But the dump file created is 80 KB. Has it exported correctly?
    (3) Given that I am running it as system, will it export all the data apart from the 2 rows? We must send the dump file to another department. We should not export any data other than the query output.
    (4) In the command, if I do not use "tables=dump.dump1", the export file gets much bigger. Don't know which is right.

    Your answers will be most helpful.

    The short answer is 'YES', it did the right thing.

    The long answer is:

    The query in the command pulls two records when run directly. On running the command above, I see an 80 KB dump file.
    In the export log file,

    I see Total estimation using BLOCKS method: 64 KB
    exported DUMP.DUMP1   4.921 KB   2 rows

    My doubts are:
    (1) Is the command that I am running correct?

    Yes. As long as your query is correct, Data Pump will export only the rows that match this query.
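    If the operating system shell mangles the quotes around the QUERY clause, putting the parameters in a parfile keeps them intact; a sketch (exp_dp.par is just a name I made up):

    # exp_dp.par
    tables=dump.dump1
    query=dump.dump1:"where num < 3"
    directory=DATA_PUMP_DIR
    dumpfile=exp_dp.dmp
    logfile=exp_dp.log

    expdp system/<pwd>@orcl parfile=exp_dp.par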

    (2) The estimate said 64 KB, whereas it also says 4.921 KB exported. But the dump file created is 80 KB. Has it exported correctly?

    The estimate is made for the full table. Since you did not specify otherwise, it used the BLOCKS estimation method: basically, how many blocks have been allocated to this table. In your case, I guess it was 80 KB.

    (3) Given that I am running it as system, will it export all the data apart from the 2 rows? We must send the dump file to another department. We should not export any data other than the query output.

    It will not export all the data, but it is going to export metadata. It exports the table definition, all indexes on it, all the statistics on the tables or indexes, etc. This is why the dump file could be bigger. There is also a 'master' table that describes the export job which gets exported. This is used by export and import to find what is in the dumpfile, and where in the dumpfile these things are. It is not user data. This table needs to be exported and will take up space in the dumpfile.

    (4) In the command, if I do not use "tables=dump.dump1", the export file gets much bigger. Don't know which is right.

    If you only want this table, then your export command is right. If you want to export more, then you need to change your export command. From what you say, it seems that your command is correct.

    If you do not want any metadata exported, you can add:

    content=data_only

    to the command line. This will export only the data, and when the dumpfile is imported, the table must already be created.
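    So a data-only run of the same job might look like this (a sketch based on your command above):

    expdp system/<pwd>@orcl tables=dump.dump1 query=dump.dump1:"where num < 3" content=data_only directory=DATA_PUMP_DIR dumpfile=exp_dp_data.dmp logfile=exp_dp_data.log

    The receiving department would then import it with impdp into a pre-created table, for example with content=data_only and table_exists_action=append.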

    Dean
