Zombie process in the database.

Dear all,

We use Oracle Database 10g with a Java-based application. Our DBA team complains that a lot of zombie processes are being created in the DB and that they consume a lot of capacity.

The term "Zombie" is new to me. I searched Google and Metalink, but still do not have a clear idea. I am asking the gurus to help identify the problem. The questions on my mind are:
1. How dangerous is a zombie process?
2. Is it possible to know how they are created (which query / which program / which process)?
3. What is the solution to prevent zombies from being created?

Here are the details of the DB.
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
PL/SQL Release 10.2.0.4.0 - Production
"CORE     10.2.0.4.0     Production"
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production

----------------------------------------------------------------------

[oracle@csfproddb ~]$top
top - 07:36:00 up 117 days,  3:25,  0 users,  load average: 0.19, 0.18, 0.10
Tasks: 222 total,   3 running, 204 sleeping,   0 stopped,  15 zombie
Cpu(s): 12.0%us,  1.0%sy,  0.0%ni, 82.0%id,  4.9%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:   8176308k total,  8129636k used,    46672k free,   136236k buffers
Swap: 10241428k total,    99300k used, 10142128k free,  7044488k cached

oracle    2341  4284  0 Jan31 ?        00:00:00 [zip] <defunct>
oracle    2342  4284  0 Jan31 ?        00:00:00 [zip] <defunct>
oracle    2895  4284  0 Jan31 ?        00:00:00 [zip] <defunct>
oracle    3833  4284  0 Jan31 ?        00:00:00 [zip] <defunct>
oracle    4510  4284  0 Jan31 ?        00:00:00 [zip] <defunct>
oracle    4531  4284  0 Jan31 ?        00:00:00 [zip] <defunct>
oracle    9294  4284  0 Jan31 ?        00:00:00 [zip] <defunct>
oracle   13063  4284  0 Jan31 ?        00:00:00 [zip] <defunct>
oracle   16035  4284  0 Jan31 ?        00:00:00 [zip] <defunct>
oracle   16880  4284  0 Jan31 ?        00:00:00 [zip] <defunct>
oracle   17080  4284  0 Jan31 ?        00:00:00 [zip] <defunct>
root     18190     1  0  2011 ?        00:01:06 [emcpdefd]
oracle   19392  4284  0 Jan31 ?        00:00:00 [zip] <defunct>
oracle   23499  4284  0 Jan31 ?        00:00:00 [zip] <defunct>
oracle   32036  4284  0 07:32 ?        00:00:00 [zip] <defunct>
oracle   32244  4284  0 07:32 ?        00:00:00 [zip] <defunct>
Thanks in advance

Edited by: 884476 on January 31, 2012 21:48

884476 wrote:

As you can see in my original post, several zombies are created from a single parent.
Is that dangerous?

No. It simply occupies a slot in the kernel's process (memory) table and prevents the kernel from reusing the process identifier of the dead child process.

Neither of those is a major issue.

How do I know what causes these processes to be created?

You would have to ask the parent process. It used a kernel call (such as execl) to ask the kernel to create the child process.

It is also the parent behaving incorrectly (without a proper signal handler to reap child termination signals) that causes the dead child process entries to stay in the process table.

If the child processes run user-written code, that code can use the setsid() kernel call to detach the child process from the parent process. The child process will then run directly under the kernel - the init process (pid 1) will be its parent. And that process periodically reaps child signals and removes the stale process entries it owns.

However, none of this is an Oracle issue - much less a problem with the SQL and PL/SQL languages (this forum's topic).

I suggest that if you want more details on Linux programming, the kernel API, and sysadmin matters, you raise your questions in the {forum:id=822} forum.
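As a quick first check (a hedged sketch, assuming you have SELECT access to the V$ views), you can verify whether the parent of the defunct children (PID 4284 in the ps listing above) is an Oracle server process at all, and if so which session and client it belongs to:

SELECT p.spid, p.program, s.sid, s.serial#, s.username, s.program AS client_program, s.module
FROM   v$process p
LEFT JOIN v$session s ON s.paddr = p.addr
WHERE  p.spid = '4284';

If this returns no rows, the parent is not an Oracle process and the zombies are purely an OS-level matter, as described above.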

Tags: Database

Similar Questions

  • Which process in the database is consuming high Oracle system resources

    Hello everybody,

    I am a beginner.

    How can we monitor or analyze which process in the database is consuming high Oracle database system resources?

    Could you please help with this issue?

    Thanks and regards



    A tuba.

    Please read SQL and PL/SQL FAQ

    ------------------
    Sybrand Bakker
    Senior Oracle DBA

  • Updating an APEX page while a process is executing on the database

    Hello

    I have created a package in the database which performs certain actions. These actions can be thought of as different steps in a workflow, but the package runs through them automatically. It would be nice if the user got information about what happens during the process execution.

    So my question is: is it possible to update information on an APEX page while the process is executing?
    I am thinking something like this:

    Step 1: done
    Step 2: done
    Step 3: processing...
    Step 4:
    Step 5:

    While the background process is running on the database.

    Is it possible, and if so, how is it done?

  • If you can get the steps as separate records (for example, if they are inserted into a table, or a pipelined function can return them), then create a report region based on a SQL query fetching those records (possibly ordered by a column storing the step ID or the insert date) - see the sketch after this list.
  • Then add JavaScript code that refreshes the report automatically after a set interval (5 seconds or so); you can find such a code snippet in this thread: {message:id=9491610}
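    For the database side, a minimal sketch of what the package could do (the PROCESS_PROGRESS table, the RUN_ID and the procedure are all hypothetical). The autonomous transaction makes each status change visible to the report region while the main job is still running:

    create table process_progress (
      run_id    number,
      step_no   number,
      step_name varchar2(100),
      status    varchar2(20),    -- PENDING / PROCESSING / DONE
      updated   date
    );

    -- inside the package body, called after each workflow step
    procedure set_step_status (p_run_id number, p_step_no number, p_status varchar2) is
      pragma autonomous_transaction;
    begin
      update process_progress
         set status  = p_status,
             updated = sysdate
       where run_id  = p_run_id
         and step_no = p_step_no;
      commit;
    end set_step_status;

    The report region would then simply select step_no, step_name and status from PROCESS_PROGRESS for the current run, refreshed by the JavaScript timer mentioned above.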

  • DatabaseIOException: Bridge was destroyed. The UMP or Java process may have died, when creating a SQLite database

    Hi all

    I have created a SQLite database in the simulator using this code:

    Database d = null;
    try
    {
        URI myURI = URI.create("file:///store/home/user/" +
            "MyTestDatabase.db");
        d = DatabaseFactory.create(myURI);   // the exception is thrown at this line
        d.close();
    }
    catch (Exception e)
    {
        System.out.println(e.getMessage());
        e.printStackTrace();
    }

    but I am unable to create the database; the following exception was thrown: DatabaseIOException: Bridge was destroyed. The UMP or Java process may have died. Please help me.

    Oh, I see, I didn't know that that was part of the error message.

    I just ran the SQLiteDemo in the OS 7.1 9900 simulator that comes installed with the 7.1 Eclipse plug-in.  It works fine (once I had added the SD card, of course).  If you can't run it, then I suspect you have something very corrupted in your installation.  I would re-download and reinstall Eclipse from here:

    http://developer.BlackBerry.com/BBOS/Java/download/

    First, of course, make sure that your PC meets the specifications required:

    http://developer.BlackBerry.com/BBOS/Java/download/requirements/

  • Database does not start... alter database open * ERROR at line 1: ORA-03113: end-of-file on communication channel Process ID: 10400 Session ID: 418 Serial number: 3

    Hi, during startup of the database the following error occurs. Please help solve the problem.

    SQL > alter database open;

    alter database open
    *
    ERROR at line 1:
    ORA-03113: end-of-file on communication channel
    Process ID: 10400
    Session ID: 418 Serial number: 3

    ============================================================

    Please see the alert log entries

    --------------------------------------------------------------------------------------------------------------------------------

    Starting up:

    Oracle Database 11 g Enterprise Edition Release 11.2.0.1.0 - 64 bit Production

    With the options of partitioning, OLAP, Data Mining and Real Application Testing.

    Using parameter settings in server-side spfile D:\APP\ADMINISTRATOR\PRODUCT\11.2.0\DBHOME_1\DATABASE\SPFILEATTNDPRD.ORA

    System parameters with non-default values:

    processes = 400

    sessions = 624

    memory_target = 4G

    control_files = "D:\ORACLE\ORADATA\ATTNDPRD\CONTROLFILE\O1_MF_8LRQYB0M_.CTL"

    control_files = "C:\ORACLE\ORADATA\ATTNDPRD\CONTROLFILE\O1_MF_8LRQYB13_.CTL"

    db_block_size = 8192

    compatible = "11.2.0.0.0"

    log_archive_format = "ARC%S_%R.%T"

    db_create_file_dest = "D:\oracle\oradata"

    db_create_online_log_dest_1 = "D:\oracle\oradata"

    db_create_online_log_dest_2 = "C:\oracle\oradata"

    db_recovery_file_dest = "C:\oracle\oradata\flash_area"

    db_recovery_file_dest_size = 8G

    undo_tablespace = "UNDOTBS1"

    remote_login_passwordfile = "EXCLUSIVE"

    db_domain = ""

    dispatchers = "(PROTOCOL=TCP) (SERVICE=ATTNDPRDXDB)"

    audit_file_dest = "D:\APP\ADMINISTRATOR\ADMIN\ATTNDPRD\ADUMP"

    audit_trail = "DB"

    db_name = "ATTNDPRD"

    open_cursors = 300

    diagnostic_dest = "D:\APP\ADMINISTRATOR"

    Sun 24 May 13:43:09 2015

    PMON started with pid = 2, OS id = 5792

    Sun 24 May 13:43:09 2015

    VKTM started with pid = 3, OS id = 6500 high priority

    VKTM clocked at (10) precision of milliseconds with DBRM quantum (100) ms

    Sun 24 May 13:43:09 2015

    GEN0 started with pid = 4, OS id = 13072

    Sun 24 May 13:43:09 2015

    DIAG started with pid = 5, OS id = 1424

    Sun 24 May 13:43:09 2015

    DBRM started with pid = 6, OS id = 8240

    Sun 24 May 13:43:09 2015

    PSP0 started with pid = 7, OS id = 2980

    Sun 24 May 13:43:09 2015

    DIA0 started with pid = 8, OS id = 12956

    Sun 24 May 13:43:09 2015

    MMAN started with pid = 9, OS id = 13356

    Sun 24 May 13:43:09 2015

    DBW0 started with pid = 10, OS id = 14248

    Sun 24 May 13:43:09 2015

    DBW1 started with pid = 11, OS id = 17900

    Sun 24 May 13:43:09 2015

    LGWR started with pid = 12, OS id = 5564

    Sun 24 May 13:43:09 2015

    CKPT started with pid = 13, OS id = 16736

    Sun 24 May 13:43:09 2015

    SMON started with pid = 14, OS id = 14068

    Sun 24 May 13:43:09 2015

    RECO started with pid = 15, OS id = 16288

    Sun 24 May 13:43:09 2015

    MMON started with pid = 16, OS id = 10884

    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...

    starting up 1 shared server(s) ...

    Environment ORACLE_BASE = D:\app\Administrator

    Sun 24 May 13:43:09 2015

    ALTER DATABASE MOUNT

    Sun 24 May 13:43:09 2015

    MMNL started with pid = 17, OS id = 16128

    Successful mount of redo thread 1, with mount id 3325657453

    Database mounted in Exclusive Mode

    Lost write protection disabled

    Completed: ALTER DATABASE MOUNT

    Sun 24 May 13:43:23 2015

    alter database open

    Sun 24 May 13:43:23 2015

    LGWR: STARTING ARCH PROCESSES

    Sun 24 May 13:43:23 2015

    Arc0 started with pid = 21, OS id = 10084

    Arc0: Started archiving

    LGWR: STARTING ARCH PROCESSES COMPLETE

    ARC0: STARTING ARCH PROCESSES

    Sun 24 May 13:43:24 2015

    Arc1 started with pid = 22, OS id = 18400

    Sun 24 May 13:43:24 2015

    ARC2 started with pid = 23, OS id = 17280

    Arc1: Started archiving

    ARC2: Started archiving

    ARC1: Becoming the 'no FAL' ARCH

    ARC1: Becoming the 'no SRL' ARCH

    ARC2: Becoming the heartbeat ARCH

    Errors in the d:\app\administrator\diag\rdbms\attndprd\attndprd\trace\attndprd_ora_10400.trc file:

    ORA-19815: WARNING: db_recovery_file_dest_size 8589934592 bytes is 100.00% used and has 0 bytes remaining available.

    ************************************************************************

    You have following choices to free up space from recovery area:

    1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard,
       then consider changing RMAN ARCHIVELOG DELETION POLICY.

    2. Back up files to tertiary device such as tape using RMAN
       BACKUP RECOVERY AREA command.

    3. Add disk space and increase db_recovery_file_dest_size parameter to
       reflect the new space.

    4. Delete unnecessary files using RMAN DELETE command. If an operating
       system command was used to delete files, then use RMAN CROSSCHECK and
       DELETE EXPIRED commands.

    ************************************************************************

    Errors in the d:\app\administrator\diag\rdbms\attndprd\attndprd\trace\attndprd_ora_10400.trc file:

    ORA-19809: limit exceeded for recovery files

    ORA-19804: cannot reclaim 44571136 bytes disk space from 8589934592 limit

    ARCH: Error 19809 Creating archive log file to 'C:\ORACLE\ORADATA\FLASH_AREA\ATTNDPRD\ARCHIVELOG\2015_05_24\O1_MF_1_10343_%U_.ARC'

    Errors in the d:\app\administrator\diag\rdbms\attndprd\attndprd\trace\attndprd_ora_10400.trc file:

    ORA-16038: log 2 sequence# 10343 cannot be archived

    ORA-19809: limit exceeded for recovery files

    ORA-00312: online log 2 thread 1: 'D:\ORACLE\ORADATA\ATTNDPRD\ONLINELOG\O1_MF_2_8LRQYD8B_.LOG'

    ORA-00312: online log 2 thread 1: 'C:\ORACLE\ORADATA\ATTNDPRD\ONLINELOG\O1_MF_2_8LRQYDF6_.LOG'

    USER (ospid: 10400): terminating the instance due to error 16038

    Sun 24 May 13:43:24 2015

    ARC3 started with pid = 24, OS id = 2188

    Errors in the d:\app\administrator\diag\rdbms\attndprd\attndprd\trace\attndprd_arc2_17280.trc file:

    ORA-19815: WARNING: db_recovery_file_dest_size 8589934592 bytes is 100.00% used and has 0 bytes remaining available.

    ************************************************************************

    You have following choices to free up space from recovery area:

    1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard,
       then consider changing RMAN ARCHIVELOG DELETION POLICY.

    2. Back up files to tertiary device such as tape using RMAN
       BACKUP RECOVERY AREA command.

    3. Add disk space and increase db_recovery_file_dest_size parameter to
       reflect the new space.

    4. Delete unnecessary files using RMAN DELETE command. If an operating
       system command was used to delete files, then use RMAN CROSSCHECK and
       DELETE EXPIRED commands.

    ************************************************************************

    Instance terminated by USER, pid = 10400

    --------------------------------------------------------------------------------------------------------------------------

    Regards

    Ngoyi

    Hello

    Now it works fine... with the following

    --------------------------------------------------

    using sqlplus

    • Startup mount
    • ALTER database noarchivelog;
    • ALTER database open;
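    For reference, a hedged sketch of the kind of commands behind the options listed in the ORA-19815 warning above, as an alternative to switching the database to NOARCHIVELOG (the 16G value is only an example):

    RMAN> CROSSCHECK ARCHIVELOG ALL;
    RMAN> DELETE EXPIRED ARCHIVELOG ALL;
    RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;   # back the logs up and remove them from disk

    SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 16G SCOPE=BOTH;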

    -------------------------------------------------------------------------------

    Regards

    Ngoyi

    -------------------------------------------------

  • Current version of the data in the database has changed since the user initiated the update process.

    Hello

    I get this error message when I update a table:

    • Current version of data in database has changed since user initiated update process. current row version identifier = "4975F66067C6EE412FF51DF46B8C4916" application row version identifier = "31163419BE48C198DE88A34AD12FE4D2".

    I get this message when a process is used to run an update on the same table.

    BEGIN
      EXECUTE IMMEDIATE 'UPDATE CONTRATS_MAINTENANCE SET EMAIL_SENDED = 0 WHERE ID = :1' USING :P27_ID;
    END;

    There is an Automatic Row Processing (DML) process on the table on this page.


    I understand that I get this error message because I update the EMAIL_SENDED column in the same table, and the Automatic Row Processing process also tries to update the same table...

    On this page I want the EMAIL_SENDED column to always have the value 0 (zero). It is a database column.

    How can I do this?

    Thank you for your help.

    Christian

    It looks like you are doing this through a normal form. In that case EMAIL_SENDED is getting two values: one from your form and one from your update statement.

    To resolve this, you can hard-code the EMAIL_SENDED item to the value '0' and keep it as a database column only.

    Then whenever an update runs, it will automatically take the value 0.

    Another solution would be to remove this column from your form and update it with an after-submit process, as sketched below.
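    A minimal sketch of that after-submit process (hedged; it assumes the form's primary key item is P27_ID, as in the EXECUTE IMMEDIATE above):

    begin
      update contrats_maintenance
         set email_sended = 0
       where id = :P27_ID;
    end;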

    BR,

    Patrick

  • Major differences in database, listener and processes on Exadata compared to a normal RAC environment?

    I would like some input on the major differences in database, listener and processes on Exadata compared to a normal RAC environment.

    I know that Exadata has not only SCAN listeners but several other listeners as well. Can an expert here provide clarification?

    Thank you

    All the right questions... Welcome to the world of Exadata.

    These are questions that could get into a lot more detail and discussion than a forum post. At a high level, you certainly don't want to delete all indexes on Exadata. However, you need fewer indexes, and your indexing strategy will change on Exadata. After you move a database from a non-Exadata to an Exadata environment, you are probably over-indexed. Indexes used for real OLTP transactions - looking up one or a few rows among many - will usually still be fastest with an index. Indexes used to avoid reading a percentage of the rows, while still returning many rows, can often be removed. Whether to keep an index depends on the nature of the workload and your application. If you have control over an index, test your queries and DML with the index(es) made invisible. Check your execution plans, the io_cell_offload columns in v$sql and the smart scan wait events to ensure you are getting smart scans... and see whether the smart scan is faster than using the index(es). Real-Time SQL Monitoring is an excellent tool to help with this - use dbms_sqltune or Grid/Cloud Control.
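    A hedged sketch of that kind of test (the index name is a placeholder; the io_cell_offload columns exist in v$sql from 11gR2 onwards):

    -- hide a candidate index from the optimizer without dropping it
    alter index my_schema.my_index invisible;

    -- after re-running the statement of interest, compare its offload figures
    select sql_id,
           io_cell_offload_eligible_bytes,
           io_cell_offload_returned_bytes,
           io_interconnect_bytes
    from   v$sql
    where  sql_id = '&sql_id';

    -- put the index back when done
    alter index my_schema.my_index visible;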

    Parallelism is a great tool to speed up queries and direct path load operations even further, and it can help trigger smart scans... but the use of parallelism really depends on your workload and must be controlled using DBRM and the parallel init parameters, possibly using parallel statement queuing, so that it does not overwhelm your system and cause concurrency problems.

    If you have a mixed-workload environment or consolidate databases on Exadata, then my opinion is that IORM plans should certainly be implemented.

  • Using the database links in a process of page plsql

    I am trying to use a database link in a PL/SQL page process.

    It works fine when I use the link name in the plsql like this:

    BEGIN 
    SELECT  
    CARD_ID 
    into   
     :P12_HDR_CARD_NUMBER  
    from     [email protected]; 
    

    But the name of the link will come from a page item (P12_DBLINK), populated as follows:

    select db_link d, db_link r from user_db_links;  
    

    I tried the following and it doesn't work:

    DECLARE 
    l_link VARCHAR2(30); 
    BEGIN 
    l_link :=  :P12_DBLINK; 
    SELECT  
    CARD_ID 
    into   
     :P12_HDR_CARD_NUMBER  
    from     fusion.EXM_CC_COMPANY_ACCOUNTS@l_link; 
    

    It gives me:

    • ORA-04052: error occurred when looking up remote object FUSION.EXM_CC_COMPANY_ACCOUNTS@L_LINK.WORLD ORA-00604: error occurred at recursive SQL level 3 ORA-02019: connection description for remote database not found

    I thought that perhaps the link name was being suffixed with .WORLD automatically, but the link name already has .WORLD at the end; I tried stripping it off first, but the error is the same.

    Is there some syntax that will work? Can I use dynamic SQL statements?

    Any suggestions are most appreciated.

    Thank you

    John

    Hi John,

    See the following example of using EXECUTE IMMEDIATE for your purpose:

    SQL - Variable for the name of the database link - Stack Overflow

    Let me know if that answers your query in the active thread
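    Applied to the page items from the original post, a minimal sketch would be (hedged: the link name cannot be a bind variable, so it is concatenated into the statement text; consider validating :P12_DBLINK, e.g. with DBMS_ASSERT, before doing so):

    declare
      l_sql varchar2(200);
    begin
      l_sql := 'select card_id from fusion.exm_cc_company_accounts@' || :P12_DBLINK;
      execute immediate l_sql into :P12_HDR_CARD_NUMBER;
    end;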

  • Current version of the data in the database has changed since the user has launched the process of update: tabular

    Hi people

    Version of the apex is 4.2

    Oracle database: 11.2.0.2

    I have created a simple tabular form on a table with the following structure:

    {code}
    Name                                      Null?    Type
    ----------------------------------------- -------- --------------
    TID                                       NOT NULL NUMBER
    TEAM_NAME                                          VARCHAR2(30)
    EMP_NAME                                           VARCHAR2(30)
    REPORT_DATE                                        DATE
    CATEGORY_NAME                                      VARCHAR2(30)
    CATEGORY_HR                                        NUMBER
    CATEGORY_COMMENT                                   VARCHAR2(120)
    CREATED_DT                                         DATE
    CREATED_BY                                         VARCHAR2(30)
    STATUS                                             VARCHAR2(1)
    {code}

    The default tabular form query is:

    {code}
    select
      TID,
      TID TID_DISPLAY,
      TEAM_NAME,
      EMP_NAME,
      REPORT_DATE,
      CATEGORY_NAME,
      CATEGORY_HR,
      CATEGORY_COMMENT,
      CREATED_DT,
      CREATED_BY
    from "#OWNER#"."TIMESHEET"
    {code}

    I wanted an additional blank row in the tabular form, so I added a UNION ALL to the query like this:

    {code}
    union all
    select
      null tid,
      null tid_display,
      null team_name,
      null emp_name,
      null report_date,
      null category_name,
      null category_hr,
      null category_comment,
      null created_dt,
      null created_by
    from dual
    {code}

    After doing that I can see a blank row in the tabular form, but every time I fill in the fields and press the SUBMIT button, I get the error below:

    Current version of data in database has changed since user initiated update process

    Thank you

    Navneet

    Why are you inserting an extra row manually into the tabular form?  There is a built-in feature to add a row.

    If you want a blank row inserted when the page loads, then create a dynamic action; that is the more appropriate feature.

    If you manually insert an extra row, then you have to write the update process for it yourself, as sketched below.
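    A hedged sketch of such an on-submit PL/SQL process (the g_fNN indexes are hypothetical and depend on the column order of your tabular form, and TIMESHEET_SEQ is a placeholder sequence):

    begin
      for i in 1 .. apex_application.g_f03.count loop
        -- rows added by the UNION ALL branch have no TID yet, so insert only those
        if apex_application.g_f02(i) is null then
          insert into timesheet (tid, team_name, emp_name)
          values (timesheet_seq.nextval,
                  apex_application.g_f03(i),
                  apex_application.g_f04(i));
        end if;
      end loop;
    end;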

    Regards.

  • Reading file and dump the data into the database using BPEL process

    I have to read a CSV file and insert its data into the database... To do this, I created an asynchronous BPEL process. A file adapter was added and associated with the Receive activity... A DB adapter was added and associated with the Invoke activity. There are two Receive activities in the process in total; when I try to test it, only the first Receive activity completes and it then waits on the second Receive activity. Please suggest how to proceed...

    Thanks, Maury.

    Hi Maury,

    There is no need for the second step you mentioned above. I do not see the need for a web service here.

    The process will be launched by the CSV file; then, using a Transform activity, you can put the data into the DB.

    There is no way to test it manually by supplying an input. All you can do to test is to drop the file into the folder you specified when configuring the file adapter.

    You just need to have the composite as below:

    ReadCSVFile---> BPEL--> DB adapter

    And in your BPEL process:

    Receive --> Transform activity --> Invoke activity

    Try to work on some samples listed on the oracle site and go through the below URL:

    Using the file adapter read feature

    Thank you

    Deepak.

  • The consecutive processes share the same database session?

    Assume we have two processes on the same page that are executed one after the other. Can we depend on them running within the same database session?

    For example, would it be possible to set a database context (sys_context) in the first process and refer to that context in a second one (e.g. a DML process)?

    Or, to put it another way: is the DB session from the APEX connection pool reused, or is it returned to the connection pool after the first process?

    This is a little difficult to verify by testing, because only a negative case would be sufficient to show that it does not work that way. Just because 10 tests out of 10 worked does not guarantee that it is always the case.

    Edited by: Sven w. on June 28, 2012 15:04 - typo correction

    Sven wrote:
    Assume we have two processes on the same page that are executed one after the other. Can we depend on them running within the same database session?

    For example, would it be possible to set a database context (sys_context) in the first process and refer to that context in a second one (e.g. a DML process)?

    Or, to put it another way: is the DB session from the APEX connection pool reused, or is it returned to the connection pool after the first process?

    The same database session is used for the duration of a "show page" or "accept page" request. If a "direct branch" (branching to a page without using a redirect) takes place after "accept page", the same session is also used to render the new page. See the usage of the Initialization PL/SQL Code / Cleanup PL/SQL Code attributes.

    However, several transactions may be involved, as APEX issues commits on various session state changes.
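    A minimal sketch of that idea (all names are hypothetical; note that DBMS_SESSION.SET_CONTEXT may only be called from the package named in CREATE CONTEXT, hence the wrapper):

    -- one-time setup
    create context my_ctx using my_ctx_pkg;

    create or replace package my_ctx_pkg as
      procedure set_val (p_name varchar2, p_value varchar2);
    end my_ctx_pkg;
    /
    create or replace package body my_ctx_pkg as
      procedure set_val (p_name varchar2, p_value varchar2) is
      begin
        dbms_session.set_context('MY_CTX', p_name, p_value);
      end set_val;
    end my_ctx_pkg;
    /

    -- first page process
    begin
      my_ctx_pkg.set_val('CURRENT_DEPT', :P1_DEPT);
    end;

    -- second page process in the same "accept page" request, and therefore the same DB session
    begin
      :P1_CHECK := sys_context('MY_CTX', 'CURRENT_DEPT');
    end;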

  • Schema of database for the processing of the results

    Hi all

    I have a question about logging test step results to the database. I use the Generic Recordset schema for recording the data.

    In my database (SQL Server) there are the following schemas: dbo, dbtest1, dbtest2. I created all the result tables (UUT_Result, Step_Result, etc.) of the Generic Recordset schema in each of them. Therefore, to display the results from the Step_Result table, I specify the schema as well as the table name, for example SELECT * FROM dbtest1.Step_Result.

    Is there a way to tell TestStand programmatically which schema (dbo, dbtest1, dbtest2) to save the results to?

    Yours sincerely,

    chati

    Yes, the tool makes it easier to visualize what the SQL is doing. A user can access as many schemas as they have permission for. The problem is that, in order to specify the schema, you must add the qualifier to your SQL statements. I don't know of an easy way to do this in TestStand.

    If you create the SQL statements manually, the notation is:

    [Database].[Schema].[Table]

    It is easier to set up multiple user names and let each one's default schema determine where the results go.

    You can also use a different database for each category of logging (and not use schemas at all).

    A lot of good information here:

    https://technet.microsoft.com/en-us/library/dd283095(v=sql.100).aspx

  • Defunct processes

    10.2.0.2 Ent Ed, aix 5.3

    We recently had a problem where a script on a remote server was stuck in a loop, creating about 4 connections per second to our production database. A new instance of said script was being started each day on top of that. Needless to say, performance started going down the tubes.

    I kept monitoring the server and the database and nothing stood out at first, but I kept seeing defunct sqlplus processes appear on the database server and the application server; they would go away on their own pretty quickly. That led us to discover the bug in the script.

    I still see a random defunct sqlplus process come and go on the database server. It is not the same PID and PPID every time. I have asked the sysadmin what creates it, and wondered whether it is normal for a finishing sqlplus session to go defunct and then disappear. While I'm about 99.9999% sure that this is not normal, I tried to find documentation in the Concepts guide to prove it and could not. Could someone point me to some documentation on this?

    Not sure what you mean by documentation, because a defunct process is a pure Unix thing. Instead of sqlplus it could be any other process.
    Defunct (zombie) processes are processes that have ended in such a way that they can no longer communicate with their parent or child process. Kill the parent or the child, and the defunct process will disappear.

    I don't know the AIX commands, but on Solaris you can use
    ptree
    and on Linux you can use
    ps -ef --forest
    to find out which process is the parent of the defunct process.

    In your case sqlplus can become a zombie once it has finished its work but is still dependent on the script that started it. And because the parent process takes no care of its child processes (or of the signals sent from child to parent), there is some delay.
    Start by finding out what these processes depend on, then look in the AIX system logs...

  • Cannot connect - error loading the database...

    Hello

    Since last night find it me impossible to log in on my Skype.

    He puts me "error loading the database. Skype." It is possible that another instance of Skype to use it.

    I have reinstalled Skype, restart my pc, close all programs that could use it.

    My husband get a leverage son skpe without pb so it comes well d a pb with my nickname... going to Word of the change back I. Despite all this, I still have the same message that appears! 1

    did anyone have the same pb? and how he made it?

    Please pour your help!

    Hello,

    So I did the procedure: Task Manager, end the Skype process.

    Then I relaunched the application, but it gives me the same result...

    I also did a system restore to an earlier point, but it gives me the same result...

    I asked a third person to open my account on another PC and it works... so it is just my PC, not a hack (that at least reassures me!).

    My spouse can still open Skype with his username, but I still get nothing.

    Could the application be tied to one Skype name and usable only by that one?

  • Manually change the database of Photos?

    Is it possible to manually edit the Photos database?

    I was diagnosing a problem that surfaced after initially migrating my iPhoto library to Photos some time ago.  It seems that a large number of photos in the iPhoto library had wrong file paths.  The pictures themselves were there, but iPhoto had errors in the file paths.  (This probably happened over years of updates, transfers to different machines, etc.  It is a very old and very large library.)

    I used a third-party tool to begin correcting the problem in the iPhoto library, and so far that seems promising.  This approach, however, means killing my Photos library and re-importing it from iPhoto.  That seems a bit risky to me - I might lose pictures added since the migration (which would not be great), since I'm not really sure how the interaction with iCloud will handle this attempt.

    But it occurred to me that I might be able to inspect and change some of the data that drives the Photos library directly, so that maybe I can apply a fix without having to re-migrate.  It seems the days of XML files for the libraries are past, but maybe there is another approach I could use to directly modify the Photos library?  All I really need to do is change some paths that point to pictures.

    Here's a post by user Pascal MaH describing how he managed to modify an iPhoto database.  Maybe you can use the same technique to alter a Photos database.  As usual, make sure you have a current backup of the library before you try:

    Pascal Mah

    Re: iPhoto 11 referenced library problems

    July 27, 2011 16:28 (in response to Terence Devlin)

    YES!

    I finally managed to get my iPhoto library back!

    But it has not been easy. I had to hack into the database file to put things right.

    After much trial and error, here is the procedure I arrived at, which finally worked for me (use at your own risk):

    0. Make sure you have enough backups so you can revert to the previous state if something goes wrong!

    1. Make a copy of your iPhoto Library [Show Package Contents]/Database/apdb/Library.apdb on your desktop.

    This file contains most of the data needed for managing your data in iPhoto.

    2. Open this file using a SQLite database manager.

    I used Navicat 9.1, which has nice import-export features. For direct editing, Base 2.0 may be easier.

    3. Open the RKMaster table.

    This table contains a record for each individual photo in your library.

    4. Fix the path of each of your photo files in the imagePath column so that it points to the file's current path.

    It contains the path of your photo files at the time they were imported and is not updated by iPhoto, even if you have since moved your photo files somewhere else.

    If you have thousands of paths to correct, a good idea is to export this column to a text file (including the modelId column for reference) and correct the paths with search & replace in your favorite text editor. Don't forget to re-import the corrected data properly, using the modelId column as the reference.

    5. If necessary, correct in the same way the content of the fileVolumeUuid column to the value for the drive that currently contains your picture files.

    If so, get this value from a picture that was recently imported from that drive.

    6. If your drive name has changed, also correct its name in the name column of the RKVolume table.

    Identify the appropriate record using the uuid obtained previously.

    7. If you are satisfied with your work, quit the database management program and restore Library.apdb to its original location inside your iPhoto library.

    Keep the old one somewhere in case something goes wrong.

    8. Run iPhoto to see if your work was successful!

    At this point, you might consider rebuilding the library (hold down alt-cmd while launching iPhoto) and choosing to repair the iPhoto library database (make sure to leave "rebuild the iPhoto library database from automatic backup" disabled!). This could correct possible inconsistencies resulting from your changes. Also, a good idea would be to rebuild all the thumbnails. If iPhoto does not bug you to locate files during this process, you may have done your work right! If not, go back to step 1.

    As said, it worked for me, without visible inconsistencies or side effects in the behaviour of iPhoto (at least for now). But perhaps some knowledgeable people could comment on and improve this process, and a database scripting guru could also help automate it. Please comment.

    Lessons learned (how I understand things as far as I know):

    A. Library.apdb stores the original drive and path of the picture files at the time they were imported, wherever that was. It is not changed when the files are moved.

    B. Some other data (binary BLOBs?) are used to track the files to their actual location. Therefore, it seems OK to move photo files once they have been imported.

    C. Unfortunately, these other data break if a file is re-created (even with the same content and location), for example by a file-based backup and restore (Time Machine).

    D. In that case, iPhoto is unable to repair the file reference if its drive and path do not match those stored in the database when the file was imported. Also, there is no mechanism in iPhoto to correct these data.

    E. Therefore, it is very important to import photo files into iPhoto only when they are already in their final location! Otherwise, your iPhoto library won't survive a file-based Time Machine backup and restore! (A block-based disk backup might work... I don't know.)

    F. ... And Apple should really, really solve this problem! (By correcting the stored file path to the current location of the file, at least when the database is repaired, and by offering at least some basic file reconnection options.)

I have not tried it, so I can't confirm or deny its effectiveness.
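For step 4, a hedged sketch of what a bulk path fix could look like directly in a SQLite tool (the old/new prefixes are placeholders; keep the backup from step 0):

UPDATE RKMaster
   SET imagePath = replace(imagePath, 'OldDisk/Pictures/', 'NewDisk/Pictures/')
 WHERE imagePath LIKE 'OldDisk/Pictures/%';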
