sqlnet.log query

Hi Experts,

Whenever a network issue or anything else causes the connection between the client and the Oracle server to be lost:

Is that information stored in the client's sqlnet.log file, or do we need to configure some settings so that it gets written to sqlnet.log?

The reason I ask is that we had an issue today; I received the error


ORA-12571 - TNS: packet writer failure

in one of our process log files. So I want to know whether this points to a network problem or some other issue.

1009072 wrote:

Hi Sybrand,

Thanks for your quick response. Here is one of my remaining concerns:

Errors originating at the client level are automatically written to sqlnet.log

I don't see any in the client's ORACLE_HOME/network/admin sqlnet.log file. Does this mean that, on the client side, everything was OK when the problem occurred?

sqlnet.log files are not always written in ORACLE_HOME/network/ or any fixed location whatsoever. I have seen them appear in the current working directory of the client process that encountered the error.
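
If the goal is to make sure client-side errors land in a predictable place, a minimal client sqlnet.ora sketch would be (the parameter names are from the Net Services reference; the directory shown is an assumption):

LOG_DIRECTORY_CLIENT = /u01/app/oracle/network/log
LOG_FILE_CLIENT = sqlnet.log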

Tags: Database

Similar Questions

  • SQLNET.log tmp in 30 __instancemodificationevent where targetinstance isa

    Hi all

    We observed an alert under Windows 2003 Server.

    Message:

    .\root\CIMV2: select * from __instancemodificationevent within 30
    where targetinstance isa "Win32_PerfFormattedData_PerfDisk_LogicalDisk"
    and targetinstance.PercentFreeSpace < 1 and targetinstance.Name != "_Total"
    0x8004106c
    (garbled file listing in the original message: bin core db doc dsa etc Setup lib lib3p journal policy sqlnet.log README RELEASE_ID tmp)

    What does it mean, and does it affect our server? If yes, how do we solve the problem?

    Kind regards

    Hello

    Your question is more complex than what is generally answered in the Microsoft Answers forums. It is better suited for the IT Pro audience on TechNet. Please post your question in the TechNet forum.

    http://social.technet.Microsoft.com/forums/en-us/category/WindowsServer

  • sqlnet.log generated in the root directory

    Hello

    I use the oracle 9.2.0.8 version
    The sqlnet.log file is generated in the root directory on my db server and it is consuming a lot of space there.

    Can I delete sqlnet.log from the root directory?
    Will it do any harm to the DB?

    All trace levels are turned off in both listener.ora and sqlnet.ora.
    As far as I know, the log file should be generated in $ORACLE_HOME/network/admin.

    Why is the log file generated in the root directory, and how do I get rid of it?


    Thanks in advance

    Yes, this file can be deleted safely.
    But look into this file first to find the errors, see why there are so many, and try to resolve the root cause.

    The file is under root because that was the current directory at the time of the error.
    Check who owns the file, verify what that user was doing, and why it could not work normally.

    But if you simply want to get rid of this file, you can follow Metalink Note 162675.1.
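
    If the goal is just to keep sqlnet.log out of the root directory rather than disable logging, a sketch along these lines in the server's sqlnet.ora points the logs at a fixed directory (the path is an assumption; see the note above for the supported options):

    LOG_DIRECTORY_CLIENT = /u01/app/oracle/network/log
    LOG_DIRECTORY_SERVER = /u01/app/oracle/network/log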

    Published by: Laura Gaigala February 3, 2009 13:03

  • Standby redo log query

    Hello team,

    I have a query:

    When I add or drop a redo log group on the primary, is the corresponding redo log group automatically added/removed on the standby?

    Kind regards

    As already answered, this change is applied automatically on the standby.

    This section of the documentation discusses the steps to add redo log files to a standby database. I find it to be a little lacking in detail, so you can also look at Metalink Note 740675.1 on online redo logs for a physical standby.

    HTH,

    Brian

  • More logging in sqlnet.log

    Hello
    in 10.2.0 on Win 2003

    What should I add to sqlnet.ora to have more information or tracing, or logging level for failed connections?
    Thank you.

    user522961 wrote:
    Hello
    in 10.2.0 on Win 2003

    What should I add to sqlnet.ora to have more information or tracing, or logging level for failed connections?
    Thank you.

    Have you looked at the Net Services reference? When you did (you do check the manual, right?), did the names of any of the parameters suggest they could be likely candidates - names with things like "TRACE" in them?

    http://download.Oracle.com/docs/CD/B19306_01/network.102/b14213/TOC.htm and look at the section on sqlnet.ora parameters.
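
    For example, the tracing- and logging-related client parameters from that reference might be set along these lines in sqlnet.ora (a sketch; the directory is an assumption):

    TRACE_LEVEL_CLIENT = 16            # or OFF / USER / ADMIN / SUPPORT
    TRACE_DIRECTORY_CLIENT = /tmp/ora
    LOG_DIRECTORY_CLIENT = /tmp/ora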

  • Listener log problem

    Hello

    As the listener.log file grows, the DBA will want to remove or rename this log file.

    Dear user1175505,

    You can either back up the old and fat listener.log, or you can simply remove it entirely! :)

    However, if you rename it and create a new listener.log file, you need to point the listener at the new file; otherwise Oracle will not start writing into it.

    Here's an illustration for you;

    $ lsnrctl show log_file;
    
    LSNRCTL for HPUX: Version 10.2.0.4.0 - Production on 16-AUG-2010 15:39:28
    
    Copyright (c) 1991, 2007, Oracle.  All rights reserved.
    
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=optdb)(PORT=1521)))
    LISTENER parameter "log_file" set to listener.log
    The command completed successfully
    
    $ lsnrctl show log_directory;
    
    LSNRCTL for HPUX: Version 10.2.0.4.0 - Production on 16-AUG-2010 15:39:52
    
    Copyright (c) 1991, 2007, Oracle.  All rights reserved.
    
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=optdb)(PORT=1521)))
    LISTENER parameter "log_directory" set to /opt/oracle/product/10.2.0/db_1/network/log/
    The command completed successfully
    
    --> Listener.log file name is listener.log and the directory is /opt/oracle/product/10.2.0/db_1/network/log/
    --> Now lets simply delete the listener.log file.
    
    $ rm /opt/oracle/product/10.2.0/db_1/network/log/listener.log
    
    --> Create a new listener.log file.
    
    $ cat /opt/oracle/product/10.2.0/db_1/network/log/listener.log
    $ --> So it is empty.
    
    $ sqlplus sys/password@opttest as sysdba
    
    SQL*Plus: Release 10.2.0.4.0 - Production on Mon Aug 16 15:41:23 2010
    
    Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.
    
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    
    SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    $ cat /opt/oracle/product/10.2.0/db_1/network/log/listener.log
    $ --> STILL EMPTY!!
    
    --> So now we need to set the logfile;
    
    $ lsnrctl set log_file /opt/oracle/product/10.2.0/db_1/network/log/listener.log
    
    LSNRCTL for HPUX: Version 10.2.0.4.0 - Production on 16-AUG-2010 15:41:52
    
    Copyright (c) 1991, 2007, Oracle.  All rights reserved.
    
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=optdb)(PORT=1521)))
    LISTENER parameter "log_file" set to /opt/oracle/product/10.2.0/db_1/network/log/listener.log
    The command completed successfully
    
    LSNRCTL> set log_status off;
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=optdb)(PORT=1521)))
    LISTENER parameter "log_status" set to OFF
    The command completed successfully
    LSNRCTL> set log_status on;
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=optdb)(PORT=1521)))
    LISTENER parameter "log_status" set to ON
    The command completed successfully
    LSNRCTL> exit
    $ ls -lrt
    total 89736
    -rw-r-----   1 oracle     oinstall      1480 Mar  5  2009 opttest.log
    -rw-r-----   1 oracle     oinstall   45932608 May 28 00:34 sqlnet.log
    -rw-r-----   1 oracle     oinstall        38 Aug 16 15:44 listener.log
    $ cat listener.log
    16-AUG-2010 15:44:31 * log_status * 0
    
    $ sqlplus aircom/password@opttest
    
    SQL*Plus: Release 10.2.0.4.0 - Production on Mon Aug 16 15:44:52 2010
    
    Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.
    
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    
    SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    $ ls -lrt
    total 89736
    -rw-r-----   1 oracle     oinstall      1480 Mar  5  2009 opttest.log
    -rw-r-----   1 oracle     oinstall   45932608 May 28 00:34 sqlnet.log
    -rw-r-----   1 oracle     oinstall       255 Aug 16 15:44 listener.log
    $ cat listener.log
    16-AUG-2010 15:44:31 * log_status * 0
    16-AUG-2010 15:44:52 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=opttest)(CID=(PROGRAM=sqlplus@optdb)(HOST=optdb)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.6.105.131)(PORT=49787)) * establish * opttest * 0
    $
    

    Hope that helps.

    Ogan

  • Problem with "Query" parameter data pump

    Hello guys,

    My table information is as below:
    Schema name: CT_COM
    Table name: order_session
    Current size of the table = ~ 75
    Total number of rows in the table: 3847904 -> select count(*) from CT_COM.order_session;
    Total number of rows matching the condition: 734042 (orders before 1 January 08) -> select count(*) from CT_COM.order_session where updated_date <= '31-DEC-07';


    Now I'm taking an export in which I want to capture all the orders that were placed before 1 January 08. Here's the command I use on a RHEL DB server. I have a 10.2.0.3 DB on this server.

    expdp "" / as sysdba "" logfile = FULL_TABLE:order_session_upto_dec_31_07.log query = CT_COM.order_session dumpfile=FULL_TABLE:order_session_upto_dec_31_07_%U.dmp:------"" where updated_date------<------= \'31-DEC-07\'\ ' filesize = 2 G job_name = order_session_upto_dec_31_07 "

    Export: Release 10.2.0.3.0 - Production on Wednesday, August 4, 2010 16:11

    Copyright (c) 2003, 2005, Oracle. All rights reserved.

    Connected to: Oracle Database 10g Release 10.2.0.3.0 - Production
    Starting "SYS"."ORDER_SESSION_UPTO_DEC_31_07": "/******** AS SYSDBA" dumpfile=FULL_TABLE:order_session_upto_dec_31_07_%U.dmp logfile=FULL_TABLE:order_session_upto_dec_31_07.log query=CT_COM.order_session:"where updated_date <= '31-DEC-07'" filesize=2G job_name=order_session_upto_dec_31_07
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Master table "SYS"."ORDER_SESSION_UPTO_DEC_31_07" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for SYS.ORDER_SESSION_UPTO_DEC_31_07 is:
    /Backup/export/tables/order_session_upto_dec_31_07_01.dmp
    Job "SYS"."ORDER_SESSION_UPTO_DEC_31_07" successfully completed at 16:11:31

    The query runs fine without error. However, if you look at the output above, no rows appear to be exported. I keep seeing "Total estimation using BLOCKS method: 0 KB". What did I do wrong?

    I even tried the same scenario with the table scott.emp and I see the same behavior. Did I miss something in my where clause, or is this a bug?

    Linux-223:$ expdp \"/ as sysdba\" dumpfile=full_table:a%U.dmp logfile=FULL_TABLE:a.log query=scott.emp:\"where sal > 2000\" filesize=2G job_name=b.job

    Export: Release 10.2.0.4.0 - Production on Wednesday, August 4, 2010 15:11:31

    Copyright (c) 2003, 2007, Oracle. All rights reserved.

    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYS"."B": "/******** AS SYSDBA" dumpfile=full_table:a%U.dmp logfile=FULL_TABLE:a.log query=scott.emp:"where sal > 2000" filesize=2G job_name=b.job
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Master table "SYS"."B" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for SYS.B is:
    /tmp/A01.dmp
    Job "SYS"."B" successfully completed at 15:12:46


    Any help will be appreciated.


    -MM

    Published by: UserMM on August 4, 2010 14:48

    It has nothing to do with your query; it has to do with the rest of your expdp command. You did not specify a schema or a table, nor a schema export, so Data Pump defaults to exporting the schema that runs the job. Since you run the export job as "sys", it will only export objects owned by sys. Since Data Pump does not export sys objects, nothing gets exported. You want to add

    schemas = CT_COM

    or

    tables = CT_COM.order_session

    Thank you

    Dean
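
    Putting that together with the original command, a corrected call might look roughly like this (a sketch reusing the names from the question; shell quoting of the where clause may need adjusting, or use a parfile):

    expdp \"/ as sysdba\" tables=CT_COM.order_session dumpfile=FULL_TABLE:order_session_upto_dec_31_07_%U.dmp logfile=FULL_TABLE:order_session_upto_dec_31_07.log query=CT_COM.order_session:\"where updated_date <= '31-DEC-07'\" filesize=2G job_name=order_session_upto_dec_31_07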

  • Data Pump export with the query option

    Hi all

    My environment is IBM AIX, Oracle 10.2.0.4.0 database.

    I need to export a few sets of records from production using a query. The query joins several tables. Since we have the BLOB data type, we export using Data Pump.

    We have lower environments, but they do not have the same set of data and tables, so I am not able to simulate the same query there. But I created a small table and mocked up the query.

    My command is

    expdp system / < pwd > @orcl tables = dump.dump1 query = dump.dump1:' ' where num < 3 ' ' directory = DATA_PUMP_DIR dumpfile = exp_dp.dmp logfile = exp_dp.log

    The query in the command pulls two records when run directly. By running the command above, I see an 80 KB dump file.
    In the export log file I see:

    Total estimation using BLOCKS method: 64 KB
    exported Dump.Dump1 = 4.921 KB, 2 rows

    My doubts are:
    (1) Is the command that I am running correct?
    (2) The estimate said 64 KB, while it also says it exported 4.921 KB. But the dump file created is 80 KB. Was it exported correctly?
    (3) Given that I run it as the system user, does it export all the data apart from the 2 rows? We need to send the dump file to another department, and we should not export any data other than the query output.
    (4) If I do not use "tables=dump.dump1" in the command, does the export file get much bigger? I don't know which is right.

    Your answers will be more useful.

    The short answer is 'YES', it did the right thing.

    The long answer is:

    The query in the command pulls two records when run directly. By running the command above, I see an 80 KB dump file.
    In the export log file I see:

    Total estimation using BLOCKS method: 64 KB
    exported Dump.Dump1 = 4.921 KB, 2 rows

    My doubts are:
    (1) Is the command that I am running correct?

    Yes, as long as your query is correct. Data Pump will export only the rows that match the query.

    (2) The estimate said 64 KB, while it also says it exported 4.921 KB. But the dump file created is 80 KB. Was it exported correctly?

    The estimate is made using the full table. Since you did not specify an estimate method, it used the BLOCKS estimation method: basically, how many blocks have been allocated to this table. In your case, I guess that was 80 KB.

    (3) Given that I run it as the system user, does it export all the data apart from the 2 rows? We need to send the dump file to another department, and we should not export any data other than the query output.

    It will not export all the data, but it will export metadata. It exports the table definition, all indexes on it, all the statistics on the table and indexes, etc. This is why the dump file can be bigger. There is also a 'master' table that describes the export job which gets exported. It is used by export and import to find out what is in the dumpfile, and where in the dumpfile those things are. It is not user data, but this table needs to be exported and takes up space in the dumpfile.

    (4) If I do not use "tables=dump.dump1" in the command, does the export file get much bigger? I don't know which is right.

    If you only want this table, then your export command is right. If you want to export more, then you need to change your export command. From what you say, it seems that your command is correct.

    If you do not want any metadata exported, you can add:

    content = data_only

    at the command line. This will export only the data, and when the dumpfile is imported, the table must already exist.
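
    For illustration, the same test command with metadata excluded might look like this (a sketch reusing the dump.dump1 example from the question; shell quoting may need adjusting):

    expdp system/<pwd>@orcl tables=dump.dump1 query=dump.dump1:\"where num < 3\" content=data_only directory=DATA_PUMP_DIR dumpfile=exp_dp.dmp logfile=exp_dp.log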

    Dean

  • How to recover the database when some of the archived log files are deleted

    I am facing a problem with the Oracle database, which is related to archivelogs.
    Our development database is running in archivelog mode, but we have no backups and no recovery catalog.
    While the database was running, the disk got full, so some archivelogs were deleted manually.
    After that they restarted the DB, and the DB now does not come up. The errors are as follows:

    -------------------------------------------------------------------------------------------------------------


    SQL> startup
    ORACLE instance started.

    Total System Global Area 1444383504 bytes
    Fixed Size                   731920 bytes
    Variable Size             486539264 bytes
    Database Buffers          956301312 bytes
    Redo Buffers                 811008 bytes
    Database mounted.
    ORA-01589: must use RESETLOGS or NORESETLOGS option for database open


    SQL> alter database open resetlogs;
    alter database open resetlogs
    *
    ERROR at line 1:
    ORA-01113: file 1 needs media recovery
    ORA-01110: data file 1: '/export/home/oracle/dev/ADVFRW/ADVFRW.system'


    SQL> recover datafile '/export/home/oracle/dev/ADVFRW/ADVFRW.system'
    ORA-00283: recovery session canceled due to errors
    ORA-01610: recovery using the BACKUP CONTROLFILE option must be done


    SQL> recover database using backup controlfile;
    ORA-00279: change 215548705 generated at 2008-09-02 17:06:10 needed for thread 1
    ORA-00289: suggestion :
    /export/home/Oracle/dev/ADVFRW/ADVFRW.archivelog1/LOG_ADVFRW_1107_1.arc
    ORA-00280: change 215548705 for thread 1 is in sequence #1107


    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /export/home/Oracle/dev/ADVFRW/ADVFRW.archivelog1/LOG_ADVFRW_1107_1.arc
    ORA-00308: cannot open archived log
    '/export/home/oracle/dev/ADVFRW/ADVFRW.archivelog1/LOG_ADVFRW_1107_1.ARC'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3


    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    CANCEL
    Media recovery cancelled.
    SQL>

    -------------------------------------------------------------------------------------------------------------


    1. How can I recover the database and bring it back online?

    Any help will be much appreciated.

    Regards
    Hemant Joshi

    Published by: hem_Kec on 7 Sep 2008 09:07

    Hello

    It is actually quite easy.
    We know that Oracle is looking for log sequence 1107. Either the archived log was deleted, or it was never generated. Since you deleted files, the first explanation is possible, BUT as there was only an instance crash, older archived logs are not required for crash recovery. They are only needed when restoring from old backup files, which is not our case.

    Thus, the missing log sequence has not been archived => it is an online redo log.
    To identify the required online redo log, we just have to query v$log as I suggested.

    Once the log file is identified, we can recover by supplying that redo log member when recovery asks for log sequence 1107.

    And it is open. We win! :)
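
    A minimal sketch of that lookup, assuming the standard v$log / v$logfile views (the redo log member name in the comment is hypothetical):

    SQL> select l.group#, l.sequence#, l.status, f.member
      2  from v$log l, v$logfile f
      3  where f.group# = l.group# and l.sequence# = 1107;

    SQL> recover database using backup controlfile;
    -- when prompted for sequence 1107, supply the member path returned above,
    -- e.g. /export/home/oracle/dev/ADVFRW/redo01.log (hypothetical name)
    SQL> alter database open resetlogs;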

  • configuring_oracle_net

    Hi all

    I have an Oracle Net question about a replication configuration...

    I have 2 databases on 2 different computers.

    The master database name and net service name is orcl, and its IP is 192.168.1.10.

    The client (the materialized view site) database name and net service name is hbfm, and its IP is 192.168.1.11.

    Now, for replication to work, should I make the following changes in the client's tnsnames.ora file:
    1. HOST = 192.168.1.10
    2. SERVICE_NAME = orcl

    Am I wrong?

    I use Oracle 10.2.0.1.0
    Regards

    each tnsnames.ora file should have a net service name entry for, and therefore access to, the other participating databases

    Which is exactly the same thing as what I said earlier: "each database must have tns entries in its $ORACLE_HOME/network/admin/tnsnames.ora for the other databases".

    Scroll back through the thread and notice that you had posted tnsnames.ora entries that point to exactly the same combination of HOST + PORT + SERVICE_NAME. Thus, the two entries were pointing to the same database.
    Now, you have posted entries that show the two databases on two different servers, 192.168.1.10 and 192.168.1.11.

    Please do not use the terms "client database" or "client site". A client is the software - SQL*Plus, TOAD, PL/SQL Developer, SQL Developer, Oracle Enterprise Manager Console etc. - that you use to connect to the database and issue administrative commands using SQL, PL/SQL and procedural APIs.

    The Oracle Enterprise Manager console is a client to the databases.
    hbfm and hbfp are identifiers for two databases which both have the same name, "orcl", on two different servers.
    You will later have problems with the multimaster setup, because the names of the two databases must be different, and the GLOBAL_NAMES parameter must be set to TRUE so that Oracle can identify them as two different databases. In the real world, two different databases have two different global names.
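
    For illustration, distinct entries on the MV site might look roughly like this (a sketch using the hosts and service names from this thread; port 1521 is an assumption):

    ORCL =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.10)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = orcl))
      )

    HBFM =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.11)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = hbfm))
      )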

    Look up ORA-12170 in the Error Messages documentation.
    Error 12170 is:

    ORA-12170: TNS:Connect timeout occurred
    Cause: The server shut down because connection establishment or communication with a client failed to complete within the allotted time interval. This may be a result of network or system delays; or this may indicate that a malicious client is trying to cause a Denial of Service attack on the server.
    Action: If the error occurred because of a slow network or system, reconfigure one or all of the parameters SQLNET.INBOUND_CONNECT_TIMEOUT, SQLNET.SEND_TIMEOUT, SQLNET.RECV_TIMEOUT in sqlnet.ora to larger values. If a malicious client is suspected, use the address in sqlnet.log to identify the source and restrict access. Note that logged addresses may not be reliable as they can be forged (e.g. in TCP/IP).
    

    Your OEM Console is not able to connect to this IP address + port number combination. Do you have an active firewall? Is the database on a VM or server that is not configured to allow connections from a client with a different IP address?

    As to your question about replicating indexes... what EXACTLY are you trying to do? You started off saying that you are "replicating via materialized views". Where do indexes come into the picture? A materialized view is a local representation of data retrieved through a query. (Or search the documentation for a better, formal definition of an MV.) You create indexes on an MV if necessary.
    The MV is different from the source table. The source table(s) may have indexes. The MV may have indexes. There is no index "replication" to do.

    If you are speaking of MultiMaster replication, tables and indexes are replicated as objects via DDL statements.

    I suggest that you establish:
    1. which servers the databases are on.
    2. whether SQL*Net connectivity is working.
    3. whether the two databases have different names.
    4. whether you are replicating tables or creating materialized views.

    If you are configuring a production environment in the near future, do not use 10.2.0.1. Use 11.2.0.2 or 10.2.0.5.
    Also consider learning Oracle Streams or GoldenGate as other options.

    Getting back to basics: I don't know what 'book' you are following. How about reading the documentation?

    This is the 10.2 documentation on Advanced Replication:
    http://download.Oracle.com/docs/CD/B19306_01/server.102/b14226/TOC.htm
    This is the 11.2 documentation on Advanced Replication:
    http://download.Oracle.com/docs/CD/E11882_01/server.112/e10706/TOC.htm

    (You can also find the Streams documentation in the Oracle Database documentation set from the 'List of Books' link. GoldenGate is a separate product.)

    Hemant K Chitale
    http://hemantoracledba.blogspot.com

  • TestStand connection to a remote database

    I have a test station running TestStand on the factory floor. I want to log to and query a database located on a network server in my facility over Ethernet. Can TestStand, using its database step types, access this database for viewing and querying records?

    TestStand's database features, including the database step types, use the Microsoft ADO/OLE DB layer and the providers installed on a system. Since Microsoft provides an ODBC provider, TestStand can talk to any remote database system that supports that technology. Take a look at the help topic "TestStand Database Fundamentals" in the online help for more information on this, and the shipped examples are worth looking at as well.

    The TestStand database step types are a way to access a database at a high level, but if you want to process the data in a more complex way, you can consider doing your database connectivity and processing in a code module using the programming language of your choice and calling that code module from TestStand.

  • ORA-03135: connection lost contact for a single user only

    Hi gurus,

    I am puzzled by this problem: no one else has any problem connecting to the database except one user, who always gets "ORA-03135: connection lost contact" when trying to connect to the Test and Prod environments.

    Oracle@PATEST > sqlplus user@testDB

    SQL*Plus: Release 10.2.0.3.0 - Production on Wed Dec 23 13:21:08 2015

    Copyright (c) 1982, 2006, Oracle.  All rights reserved.

    Enter the password:

    ERROR:

    ORA-03135: connection lost contact

    I get the same error when trying to connect via EDIVIEW (an application), from the command prompt, or from a Cygwin terminal directly.

    tnsping works fine and resolves the connect descriptor.

    Here's what I did troubleshooting:

    1. Made sure that the user is not locked in the database.

    2. Put the database in restricted mode, granted this user restricted session access, and tried the connection as that user - no luck.

    3. Bounced the listener.

    4. Bounced the database.

    5. Some online forums suggest increasing the timeout setting in sqlnet.ora; in our environment we use OID and the sqlnet.ora is in a central directory, so this cannot be the issue since everyone else can connect.

    6. I checked the sqlnet.log file and there seems to be no incoming connection from this user or machine.

    System and database information:

    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64 bit Production

    Mna3dbts SunOS 5.10 Generic_150400-27 sun4u sparc SUNW, SPARC-Enterprise

    Any advice would be appreciated

    Hello all,

    Thanks to everyone who chimed in. Surprisingly, the problem was solved by simply changing the user's password in the database. As soon as the password was reset, I was able to connect with that user name.

  • Alert log messages on a standby database

    Hi all

    11gR2

    RHEL 6.5

    I keep seeing these messages in our standby database's alert log:


    ****************** Large Pages Information *****************

    Per process system memlock (soft) limit = 64 KB

    Total Shared Global Region in Large Pages = 0 KB (0%)

    Large Pages used by this instance: 0 (0 KB)
    Large Pages unused system wide = 0 (0 KB)
    Large Pages configured system wide = 0 (0 KB)
    Large Page size = 2048 KB

    RECOMMENDATION:
      Total System Global Area size is 2050 MB. For optimal performance,
      prior to the next instance restart:
      1. Increase the number of unused large pages by at least
      1025 (page size 2048 KB, total size 2050 MB) system wide to
      get 100% of the System Global Area allocated with large pages
      2. Large pages are automatically locked into physical memory.
      Increase the per process memlock (soft) limit to at least 2058 MB
      to lock 100% of the System Global Area's large pages into physical memory

    =================================================
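
    A minimal sketch of the OS-level changes that recommendation refers to, assuming Linux HugePages and an "oracle" OS user (the values come from the recommendation above; check your platform documentation before applying):

    # /etc/security/limits.conf -- raise the per-process memlock limit (values in KB; 2058 MB ~ 2107392 KB)
    oracle   soft   memlock   2107392
    oracle   hard   memlock   2107392

    # /etc/sysctl.conf -- reserve at least 1025 huge pages of 2048 KB each
    vm.nr_hugepages = 1025

    # then: sysctl -p, log in again as oracle, and restart the instance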

    Are the messages above just warnings? Or are there settings that must be changed?

    Thank you very much

    JC

    Statistically, some connections fail, and the frequency will depend on how busy your system is.

    Set SQLNET.INBOUND_CONNECT_TIMEOUT to a value greater than 60.

    My system:

    SQLNET.INBOUND_CONNECT_TIMEOUT = 120

    You can also try adding to your listener.ora file

    DIAG_ADR_ENABLED_listener_name = OFF

    This way, such failures are only logged to sqlnet.log.

    Best regards

    mseberg
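
    A sketch of where those two settings live, assuming a listener named LISTENER:

    # sqlnet.ora on the database server
    SQLNET.INBOUND_CONNECT_TIMEOUT = 120

    # listener.ora (suffix the parameter with your listener's name)
    DIAG_ADR_ENABLED_LISTENER = OFF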

  • ORA-28759: failure to open file

    What I have: a Red Hat 6 server, a remote Oracle database configured for TCPS connections, and the Oracle Instant Client (basic, odbc, sqlplus) installed from RPMs.

    I'm trying to set up the Oracle Instant Client on Linux to connect to a remote database (possibly on Windows).
    When I enter the command:


    /usr/lib/oracle/11.2/client64/bin/sqlplus /@AVAYAPDSDB

    I get the error:

    SQL*Plus: Release 11.2.0.4.0 Production on Sat Aug 29 12:04:39 2015
    Copyright (c) 1982, 2013, Oracle.  All rights reserved.
    ERROR: ORA-28759: failure to open file

    Unfortunately I have no engineers nearby who can help me solve this problem, so my hopes really rest on this community.

    After some googling and research into anything that might help, I realized that a client trace of the sqlplus connection attempt would be a good starting point.
    So now the trace looks like this:

    (1309189888) [29 AUGUST 2015 12:04:39:133] - TRACING CONFIGURATION INFORMATION BELOW.
    [29 August 2015 12:04:39:133] (1309189888) new flow path is /tmp/ora/cli_30063.trc
    [29 August 2015 12:04:39:133] (1309189888) new trace level is 16
    (1309189888) [29 AUGUST 2015 12:04:39:133] - TRACE CONFIGURATION INFORMATION ENDS.
    (1309189888) [29 AUGUST 2015 12:04:39:133] - INFORMATION ABOUT THE SOURCE SETTINGS FOLLOW -
    Charge of attempt [29 August 2015 12:04:39:133] (1309189888) the system of pfile source /usr/lib/oracle/11.2/client64/network/admin/sqlnet.ora
    The parameter source [29 August 2015 12:04:39:133] (1309189888) load successfully
    (1309189888) [29 AUGUST 2015 12:04:39:133]
    (1309189888) charge of attempt [29 August 2015 12:04:39:133] pfile file source /root/.sqlnet.ora local
    The parameter source [29 August 2015 12:04:39:133] (1309189888) load successfully
    (1309189888) [29 AUGUST 2015 12:04:39:133]
    (1309189888) [29 August 2015 12:04:39:133]-> PARAMETER TABLE BURDEN RESULTS FOLLOW < -.
    Load of table (1309189888) [29 August 2015 12:04:39:133] Successful parameter
    (1309189888) [29 August 2015 12:04:39:133]-> PARAMETER TABLE has THE FOLLOWING CONTENT < -.
    (1309189888) [29 AUGUST 2015 12:04:39:133] SSL_SERVER_DN_MATCH = FALSE
    (1309189888) [29 AUGUST 2015 12:04:39:133] DIAG_ADR_ENABLED = OFF
    (1309189888) [29 AUGUST 2015 12:04:39:133] SSL_CIPHER_SUITES = (SSL_RSA_EXPORT_WITH_RC4_40_MD5)
    (1309189888) [29 AUGUST 2015 12:04:39:133] TRACE_LEVEL_CLIENT = SUPPORT
    (1309189888) [29 AUGUST 2015 12:04:39:133] SSL_VERSION = 0
    (1309189888) [29 AUGUST 2015 12:04:39:133] SQLNET. WALLET_OVERRIDE = TRUE
    (1309189888) [29 AUGUST 2015 12:04:39:133] NAMES. DIRECTORY_PATH = (TNSNAMES)
    (1309189888) [29 AUGUST 2015 12:04:39:133] SQLNET. AUTHENTICATION_SERVICES = (TCPS, DOB)
    (1309189888) [29 August 2015 12:04:39:133] WALLET_LOCATION = (SOURCE = (METHOD = FILE) (METHOD_DATA = (DIRECTORY = / usr/lib/oracle/11.2/client64/network/admin/AVAYAPDSDB)))
    (1309189888) [29 August 2015 12:04:39:133] TRACE_DIRECTORY_CLIENT = / tmp/ora
    (1309189888) [29 AUGUST 2015 12:04:39:133] SSL_CLIENT_AUTHENTICATION = TRUE
    (1309189888) [29 AUGUST 2015 12:04:39:133] - INFORMATION ABOUT PARAMETERS SOURCE ENDS.
    (1309189888) [29 AUGUST 2015 12:04:39:133] - LOG CONFIGURATION INFORMATION BELOW.
    The log stream [29 August 2015 12:04:39:133] (1309189888) will be "/ usr/lib/oracle/11.2/client64/sqlnet.log".
    (1309189888) [29 August 2015 12:04:39:133] Log stream validation not asked
    (1309189888) [29 AUGUST 2015 12:04:39:133] - LOG CONFIGURATION INFORMATION ENDS.
    (1309189888) [29 August 2015 12:04:39:134] nlstdipi: entry
    (1309189888) nlstdipi [12:04:39:134 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:134] final: entry
    (1309189888) final [12:04:39:134 August 29, 2015]: number in the overall area of NL is now 1
    (1309189888) final [12:04:39:134 August 29, 2015]: count in the region of gbl OR now: 1
    (1309189888) [29 August 2015 12:04:39:134] nrigbi: entry
    (1309189888) [29 August 2015 12:04:39:134] nrigbni: entry
    (1309189888) nrigbni [August 29, 2015 12:04:39:134]: could not get the file tnsnav.ora navigation data
    (1309189888) final [12:04:39:135 August 29, 2015]: count in the region of gbl OR now: 3
    (1309189888) final [12:04:39:135 August 29, 2015]: output
    (1309189888) niqname [12:04:39:135 August 29, 2015]: using nnfsn2a() to build connect database descriptor (possibly remote).
    (1309189888) [29 August 2015 12:04:39:135] nnfgiinit: entry
    (1309189888) [29 August 2015 12:04:39:135] nncpcin_maybe_init: default name server's domain is [root]
    (1309189888) nnfgiinit [12:04:39:135 August 29, 2015]: installation read the path
    (1309189888) [29 August 2015 12:04:39:136] nnfgsrsp: entry
    (1309189888) nnfgsrsp [12:04:39:136 August 29, 2015]: get the path of names.directory_path parameter or native_names.directory_path
    (1309189888) [29 August 2015 12:04:39:136] nnfgsrdp: entry
    (1309189888) nnfgsrdp [12:04:39:136 August 29, 2015]: path setting:
    (1309189888) nnfgsrdp [12:04:39:136 August 29, 2015]: check TNSNAMES element
    (1309189888) [29 August 2015 12:04:39:136] nnfgsrdp: defined path
    (1309189888) [29 August 2015 12:04:39:136] nnfun2a: entry
    (1309189888) [29 August 2015 12:04:39:136] nlolgobj: entry
    (1309189888) [29 August 2015 12:04:39:136] nnfgrne: entry
    (1309189888) nnfgrne [12:04:39:136 August 29, 2015]: go read if path adapters
    (1309189888) nnfgrne [12:04:39:136 August 29, 2015]: switching adapter TNSNAMES
    (1309189888) [29 August 2015 12:04:39:136] nnftboot: entry
    (1309189888) [29 August 2015 12:04:39:136] nlpaxini: entry
    (1309189888) nlpaxini [12:04:39:136 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:136] nnftmlf_make_local_addrfile: entry
    (1309189888) nnftmlf_make_local_addrfile [12:04:39:136 August 29, 2015]: failure of the construction of the local names file
    (1309189888) [29 August 2015 12:04:39:136] nnftmlf_make_system_addrfile: entry
    (1309189888) nnftmlf_make_system_addrfile [12:04:39:136 August 29, 2015]: file system names is /usr/lib/oracle/11.2/client64/network/admin/tnsnames.ora
    (1309189888) nnftmlf_make_system_addrfile [12:04:39:136 August 29, 2015]: output
    (1309189888) nnftboot [12:04:39:136 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:136] nnftrne: entry
    (1309189888) nnftrne [12:04:39:136 August 29, 2015]: original name: AVAYAPDSDB
    (1309189888) [29 August 2015 12:04:39:136] nnfttran: entry
    (1309189888) nncpdpt_dump_ptable [12:04:39:136 August 29, 2015]:---/usr/lib/oracle/11.2/client64/network/admin/tnsnames.ora TABLE has THE CONTENT FOLLOWING.
    (1309189888) [29 August 2015 12:04:39:136] nncpdpt_dump_ptable: AVAYAPDSDB = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCPS)(HOST = ccpdsdko) (PORT = 2484))) (CONNECT_DATA = (SERVICE_NAME = orastd)))
    (1309189888) nncpdpt_dump_ptable [12:04:39:136 August 29, 2015]: - END TABLE /usr/lib/oracle/11.2/client64/network/admin/tnsnames.ora -
    (1309189888) nnfttran [12:04:39:136 August 29, 2015]: output
    (1309189888) nnftrne [12:04:39:136 August 29, 2015]: using address tnsnames.ora (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCPS)(HOST = ccpdsdko) (PORT = 2484))) (CONNECT_DATA = (SERVICE_NAME = orastd))) for name AVAYAPDSDB
    (1309189888) nnftrne [12:04:39:136 August 29, 2015]: output
    (1309189888) nigtrm [12:04:39:137 August 29, 2015]: count in the world NOR is size now 2
    (1309189888) nigtrm [12:04:39:137 August 29, 2015]: count in the global area of NL is now 2
    (1309189888) [29 August 2015 12:04:39:137] final: entry
    (1309189888) final [12:04:39:137 August 29, 2015]: count in the global area of NL is now 3
    (1309189888) final [12:04:39:137 August 29, 2015]: count in the region of gbl OR now: 3
    (1309189888) final [12:04:39:137 August 29, 2015]: output
    (1309189888) niqname [12:04:39:137 August 29, 2015]: using nnfsn2a() to build connect database descriptor (possibly remote).
    (1309189888) [29 August 2015 12:04:39:137] nnfun2a: entry
    (1309189888) [29 August 2015 12:04:39:137] nlolgobj: entry
    (1309189888) [29 August 2015 12:04:39:137] nnfgrne: entry
    (1309189888) nnfgrne [12:04:39:137 August 29, 2015]: go read if path adapters
    (1309189888) nnfgrne [12:04:39:137 August 29, 2015]: switching adapter TNSNAMES
    (1309189888) [29 August 2015 12:04:39:137] nnftrne: entry
    (1309189888) nnftrne [12:04:39:137 August 29, 2015]: original name: AVAYAPDSDB
    (1309189888) [29 August 2015 12:04:39:137] nnfttran: entry
    (1309189888) nncpdpt_dump_ptable [12:04:39:137 August 29, 2015]:---/usr/lib/oracle/11.2/client64/network/admin/tnsnames.ora TABLE has THE CONTENT FOLLOWING.
    (1309189888) [29 August 2015 12:04:39:137] nncpdpt_dump_ptable: AVAYAPDSDB = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCPS)(HOST = ccpdsdko) (PORT = 2484))) (CONNECT_DATA = (SERVICE_NAME = orastd)))
    (1309189888) nncpdpt_dump_ptable [12:04:39:137 August 29, 2015]: - END TABLE /usr/lib/oracle/11.2/client64/network/admin/tnsnames.ora -
    (1309189888) nnfttran [12:04:39:137 August 29, 2015]: output
    (1309189888) nnftrne [12:04:39:137 August 29, 2015]: using address tnsnames.ora (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCPS)(HOST = ccpdsdko) (PORT = 2484))) (CONNECT_DATA = (SERVICE_NAME = orastd))) for name AVAYAPDSDB
    (1309189888) nnftrne [12:04:39:137 August 29, 2015]: output
    (1309189888) nlolfmem [12:04:39:137 August 29, 2015]: output
    (1309189888) nigtrm [12:04:39:137 August 29, 2015]: count in the world NOR is size now 2
    (1309189888) nigtrm [12:04:39:137 August 29, 2015]: count in the global area of NL is now 2
    (1309189888) [29 August 2015 12:04:39:137] final: entry
    (1309189888) final [12:04:39:137 August 29, 2015]: count in the global area of NL is now 3
    (1309189888) final [12:04:39:137 August 29, 2015]: count in the region of gbl OR now: 3
    (1309189888) final [12:04:39:137 August 29, 2015]: output
    (1309189888) niqname [12:04:39:137 August 29, 2015]: HST is already a NVstring.
    (1309189888) niqname [12:04:39:137 August 29, 2015]: insertion of CID.
    (1309189888) nigtrm [12:04:39:137 August 29, 2015]: count in the world NOR is size now 2
    (1309189888) nigtrm [12:04:39:137 August 29, 2015]: count in the global area of NL is now 2
    (1309189888) [29 August 2015 12:04:39:137] final: entry
    (1309189888) final [12:04:39:137 August 29, 2015]: count in the global area of NL is now 3
    (1309189888) final [12:04:39:137 August 29, 2015]: count in the region of gbl OR now: 3
    (1309189888) final [12:04:39:137 August 29, 2015]: output
    (1309189888) niqname [12:04:39:137 August 29, 2015]: HST is already a NVstring.
    (1309189888) niqname [12:04:39:137 August 29, 2015]: insertion of CID.
    (1309189888) [29 August 2015 12:04:39:137] niotns: entry
    (1309189888) [29 August 2015 12:04:39:137] niotns: niotns: set up the interrupt handler...
    (1309189888) [29 August 2015 12:04:39:137] nigsui: entry
    (1309189888) [29 August 2015 12:04:39:138] nigsui: interrupt the user value: hdl = 1, prc is 0x4f9596f0, ctx = 0x1bb32d0.
    (1309189888) nigsui [12:04:39:138 August 29, 2015]: exit (0)
    (1309189888) snsgblini [12:04:39:138 August 29, 2015]: Max no descriptors of supported is 4096
    (1309189888) snsgblini [12:04:39:138 August 29, 2015]: output
    (1309189888) snldlldl [12:04:39:138 August 29, 2015]: unable to load the shared library what
    (1309189888) snldlldl [12:04:39:138 August 29, 2015]: Err: /usr/lib/oracle/11.2/client64/lib/libnque11.so: cannot open shared object file: no such file or directory
    (1309189888) [29 August 2015 12:04:39:138] snsbitini_ts: entry
    (1309189888) snsbitini_ts [12:04:39:138 August 29, 2015]: a normal exit
    (1309189888) [29 August 2015 12:04:39:138] snsbitini_ts: entry
    (1309189888) snsbitini_ts [12:04:39:138 August 29, 2015]: a normal exit
    (1309189888) niotns [12:04:39:138 August 29, 2015]: don't mean to enable dead connection detection.
    (1309189888) niotns [12:04:39:138 August 29, 2015]: call address: (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = CST)(HOST=ccpdsdko) (PORT = 2484))) (CONNECT_DATA = (SERVICE_NAME = orastd) (CID = (= sqlplus PROGRAM) (HOST = cc - allplus.msk.vtb24.ru)(USER=root)))
    (1309189888) [29 August 2015 12:04:39:138] nsgettrans_bystring: entry
    (1309189888) nsgettrans_bystring [12:04:39:138 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:138] nscall: entry
    (1309189888) [29 August 2015 12:04:39:138] nsmal: entry
    (1309189888) nsmal [12:04:39:138 August 29, 2015]: 272 bytes to 0x1bca160
    (1309189888) nsmal [12:04:39:138 August 29, 2015]: a normal exit
    (1309189888) nscall [12:04:39:138 August 29, 2015]: connection...
    (1309189888) nlad_expand_hst [12:04:39:138 August 29, 2015]: expansion ccpdsdko
    (1309189888) [29 August 2015 12:04:39:138] snlinGetAddrInfo: entry
    (1309189888) snlinGetAddrInfo [12:04:39:139 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:139] snlinGetNameInfo: entry
    (1309189888) snlinGetNameInfo [12:04:39:139 August 29, 2015]: output
    (1309189888) nlad_expand_hst [12:04:39:139 August 29, 2015]: adding IP 10.64.245.211
    (1309189888) [29 August 2015 12:04:39:139] snlinFreeAddrInfo: entry
    (1309189888) snlinFreeAddrInfo [12:04:39:139 August 29, 2015]: output
    (1309189888) nlad_expand_hst [12:04:39:139 August 29, 2015]: result: (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = CST)(HOST=10.64.245.211) (PORT = 2484))) (CONNECT_DATA = (SERVICE_NAME = orastd) (CID = (= sqlplus PROGRAM) (HOST = cc - allplus.msk.vtb24.ru)(USER=root)))
    (1309189888) [29 August 2015 12:04:39:139] nladini: entry
    (1309189888) nladini [12:04:39:139 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:139] nladget: entry
    (1309189888) nladget [12:04:39:139 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:139] nsmal: entry
    (1309189888) nsmal [12:04:39:139 August 29, 2015]: 171 bytes to 0x1bcbc50
    (1309189888) nsmal [12:04:39:139 August 29, 2015]: a normal exit
    (1309189888) [29 August 2015 12:04:39:139] nsc2addr: entry
    (1309189888) [29 August 2015 12:04:39:139] nsc2addr: (DESCRIPTION = (ADDRESS = (PROTOCOL = CST)(HOST=10.64.245.211) (PORT = 2484)) (CONNECT_DATA = (SERVICE_NAME = orastd) (CID = (= sqlplus PROGRAM) (HOST = cc - allplus.msk.vtb24.ru)(USER=root)))
    (1309189888) [29 August 2015 12:04:39:139] ntzini: entry
    (1309189888) [29 August 2015 12:04:39:139] ntzSetupConnection: entry
    (1309189888) [29 August 2015 12:04:39:139] ntzgbhapip: entry
    (1309189888) ntzgbhapip [12:04:39:139 August 29, 2015]: no value for the specified bhapi - using the default value for parameter: 'TRUE '.
    (1309189888) ntzgbhapip [12:04:39:139 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:139] nzsuppgp_get_parameter: entry
    (1309189888) nzsuppgp_get_parameter [12:04:39:139 August 29, 2015]: parameter 'trace_level_server' does not exist.
    (1309189888) nzsuppgp_get_parameter [12:04:39:139 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:140] nzsuppgp_get_parameter: entry
    (1309189888) nzsuppgp_get_parameter [August 29, 2015 12:04:39:140]: value retrieved for the parameter "trace_level_client": 0.
    (1309189888) nzsuppgp_get_parameter [12:04:39:140 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:140] nztysgs_genseed: entry
    (1309189888) [29 August 2015 12:04:39:143] nzsuppgp_get_parameter: entry
    (1309189888) nzsuppgp_get_parameter [12:04:39:143 August 29, 2015]: parameter 'ssl.renegotiate' does not exist.
    (1309189888) nzsuppgp_get_parameter [12:04:39:143 August 29, 2015]: output
    (1309189888) ntzSetupConnection [12:04:39:143 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:143] ntzSetupConnection: entry
    (1309189888) [29 August 2015 12:04:39:143] ntzgbhapip: entry
    (1309189888) ntzgbhapip [12:04:39:143 August 29, 2015]: no value for the specified bhapi - using the default value for parameter: 'TRUE '.
    (1309189888) ntzgbhapip [12:04:39:143 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:143] nzsuppgp_get_parameter: entry
    (1309189888) nzsuppgp_get_parameter [12:04:39:143 August 29, 2015]: parameter 'trace_level_server' does not exist.
    (1309189888) nzsuppgp_get_parameter [12:04:39:143 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:143] nzsuppgp_get_parameter: entry
    (1309189888) nzsuppgp_get_parameter [August 29, 2015 12:04:39:143]: value retrieved for the parameter "trace_level_client": 0.
    (1309189888) nzsuppgp_get_parameter [12:04:39:143 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:143] nztysgs_genseed: entry
    (1309189888) [29 August 2015 12:04:39:146] nzsuppgp_get_parameter: entry
    (1309189888) nzsuppgp_get_parameter [12:04:39:146 August 29, 2015]: parameter 'ssl.renegotiate' does not exist.
    (1309189888) nzsuppgp_get_parameter [12:04:39:146 August 29, 2015]: output
    (1309189888) ntzSetupConnection [12:04:39:146 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:146] ntzcsgtab: entry
    (1309189888) ntzcsgtab [12:04:39:146 August 29, 2015]: output
    (1309189888) ntzini [12:04:39:146 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:146] nttbnd2addr: entry
    (1309189888) [29 August 2015 12:04:39:146] snlinGetAddrInfo: entry
    (1309189888) snlinGetAddrInfo [12:04:39:146 August 29, 2015]: output
    (1309189888) nttbnd2addr [12:04:39:146 August 29, 2015]: using the IP host address: 10.64.245.211
    (1309189888) [29 August 2015 12:04:39:146] snlinFreeAddrInfo: entry
    (1309189888) [29 August 2015 12:04:39:147] nsopenalloc_nsntx: nlhthput on mplx_ht_nsgbu:ctx = 1bd5fe0, nsntx = 1bd6610
    (1309189888) nsopenmplx [12:04:39:147 August 29, 2015]: a normal exit
    (1309189888) [29 August 2015 12:04:39:147] nsopen: transport of opening...
    (1309189888) [29 August 2015 12:04:39:147] ntzconnect: entry
    (1309189888) [29 August 2015 12:04:39:147] ntzCreateConnection: entry
    (1309189888) [29 August 2015 12:04:39:147] nttcon: entry
    (1309189888) nttcon [12:04:39:147 August 29, 2015]: toc = 1
    (1309189888) [29 August 2015 12:04:39:147] nttcnp: entry
    (1309189888) nttcnp [12:04:39:147 August 29, 2015]: creates a socket.
    (1309189888) nttcnp [12:04:39:147 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:147] nttcni: entry
    (1309189888) nttcni [12:04:39:147 August 29, 2015]: Tcp conn timeout = 60000 (ms)
    (1309189888) nttcni [12:04:39:147 August 29, 2015]: TCP Connect TO enabled. Switch to the n. b.
    (1309189888) [29 August 2015 12:04:39:147] nttctl: entry
    (1309189888) nttctl [12:04:39:147 August 29, 2015]: definition of connection in non-blocking mode
    (1309189888) [29 August 2015 12:04:39:147] nttcni: try to connect to the Socket 4.
    (1309189888) [29 August 2015 12:04:39:147] ntt2err: entry
    (1309189888) ntt2err [12:04:39:147 August 29, 2015]: output
    (1309189888) ntctst [12:04:39:147 August 29, 2015]: NTTEST list size is 1 - survey not call
    (1309189888) sntpoltst [12:04:39:147 August 29, 2015]: no conn for test 1, timeout for 60
    (1309189888) sntpoltst [12:04:39:147 August 29, 2015]: fd 4 need for readiness events 1
    (1309189888) sntpoltst [12:04:39:147 August 29, 2015]: fd 4 a 1 preparation of events
    (1309189888) sntpoltst [12:04:39:147 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:147] nttctl: entry
    (1309189888) nttctl [12:04:39:147 August 29, 2015]: Clearing non-blocking mode
    (1309189888) [29 August 2015 12:04:39:147] snlinGetNameInfo: entry
    (1309189888) snlinGetNameInfo [12:04:39:147 August 29, 2015]: output
    (1309189888) nttcni [12:04:39:147 August 29, 2015]: connected on ipaddr 10.64.245.240
    (1309189888) nttcni [12:04:39:147 August 29, 2015]: output
    (1309189888) nttcon [12:04:39:147 August 29, 2015]: layer NT TCP/IP connection has been set up.
    (1309189888) nttcon [12:04:39:147 August 29, 2015]: TCP_NODELAY value on 4
    (1309189888) nttcon [12:04:39:147 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:147] ntzAllocate: entry
    (1309189888) [29 August 2015 12:04:39:147] ntzAllocate: 304 bytes of memory allocation.
    (1309189888) ntzAllocate [12:04:39:147 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:148] ntzConfigure: entry
    (1309189888) [29 August 2015 12:04:39:148] ntzgsvp: entry
    (1309189888) [29 August 2015 12:04:39:148] ntzGetStringParameter: entry
    (1309189888) ntzGetStringParameter [12:04:39:148 August 29, 2015]: find the value for the configuration parameter 'ssl_version': '0 '.
    (1309189888) ntzGetStringParameter [12:04:39:148 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:148] ntzConvertToNumeric: entry
    (1309189888) ntzConvertToNumeric [12:04:39:148 August 29, 2015]: output
    (1309189888) ntzgsvp [12:04:39:148 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:148] ntzgcpp: entry
    (1309189888) [29 August 2015 12:04:39:148] ntzAllocate: entry
    (1309189888) [29 August 2015 12:04:39:148] ntzAllocate: allocation of 16 bytes of memory.
    (1309189888) ntzAllocate [12:04:39:148 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:148] ntzGetStringParameter: entry
    (1309189888) ntzGetStringParameter [12:04:39:148 August 29, 2015]: find the value for the configuration parameter "ssl_cipher_suites": "SSL_RSA_EXPORT_WITH_RC4_40_MD5".
    (1309189888) ntzGetStringParameter [12:04:39:148 August 29, 2015]: output
    (1309189888) ntzgcpp [12:04:39:148 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:148] ntzCreateCipherSpec: entry
    (1309189888) ntzCreateCipherSpec [12:04:39:148 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:148] ntzgcap: entry
    (1309189888) ntzgcap [12:04:39:148 August 29, 2015]: recovered the 'TRUE' value for the client authentication setting
    (1309189888) ntzgcap [12:04:39:148 August 29, 2015]: output
    (1309189888) ntzConfigure [12:04:39:148 August 29, 2015]: the client authentication is required.
    (1309189888) [29 August 2015 12:04:39:148] ntzgwrl: entry
    (1309189888) [29 August 2015 12:04:39:148] ntzgwrlFromFile: entry
    (1309189888) [29 August 2015 12:04:39:148] ntzGetStringParameter: entry
    (1309189888) ntzGetStringParameter [12:04:39:148 August 29, 2015]: find the value for the configuration parameter "wallet_location": "SOURCE = (METHOD = FILE) (METHOD_DATA = (DIRECTORY = / usr/lib/oracle/11.2/client64/network/admin/AVAYAPDSDB)).
    (1309189888) ntzGetStringParameter [12:04:39:148 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:148] ntzAllocate: entry
    (1309189888) [29 August 2015 12:04:39:148] ntzAllocate: 111 bytes of memory allocation.
    (1309189888) ntzAllocate [12:04:39:148 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:148] ntzAllocate: entry
    (1309189888) ntzAllocate [12:04:39:148 August 29, 2015]: allowing 63 bytes of memory.
    (1309189888) ntzAllocate [12:04:39:148 August 29, 2015]: output
    (1309189888) ntzgwrlFromFile [12:04:39:148 August 29, 2015]: output
    (1309189888) ntzlogin [12:04:39:148 August 29, 2015]: wallet open failed with error 28759
    (1309189888) ntzlogin [12:04:39:148 August 29, 2015]: returning NZ error 28759 in the result structure
    (1309189888) ntzlogin [12:04:39:148 August 29, 2015]: failed with error 540
    (1309189888) ntzlogin [12:04:39:148 August 29, 2015]: output
    (1309189888) ntzConfigure [12:04:39:148 August 29, 2015]: failed with error 540
    (1309189888) ntzConfigure [12:04:39:148 August 29, 2015]: output
    (1309189888) ntzCreateConnection [12:04:39:149 August 29, 2015]: failed with error 540
    (1309189888) ntzconnect [12:04:39:149 August 29, 2015]: failed with error 540
    (1309189888) ntzconnect [12:04:39:149 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:149] nserror: entry
    (1309189888) [29 August 2015 12:04:39:149] nserror: nsres: id = 0, op is 65, ns = 12560, ns2 = 0; NT [0] = 540, nt [1] = 0, nt [2] = 0; ORA [0] = 28759, ora [1] = 0, ora [2] = 0
    (1309189888) nsopen [12:04:39:149 August 29, 2015]: cannot open transport
    (1309189888) [29 August 2015 12:04:39:149] snsbitts_ts: entry
    (1309189888) snsbitts_ts [12:04:39:149 August 29, 2015]: the ILO has acquired
    (1309189888) snsbitts_ts [12:04:39:149 August 29, 2015]: a normal exit
    (1309189888) nsbfr [12:04:39:149 August 29, 2015]: a normal exit
    (1309189888) nsiofrrg [12:04:39:149 August 29, 2015]: output
    (1309189888) nsiocancel [12:04:39:149 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:149] nsvntx_dei: entry
    (1309189888) nsvntx_dei [12:04:39:149 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:149] nsopenfree_nsntx: mplx_ht_nsgbu nlhthdel, ctx = 1bd5fe0 nsntx = 1bd6610
    (1309189888) [29 August 2015 12:04:39:149] nsiocancel: entry
    (1309189888) [29 August 2015 12:04:39:149] nsiofrrg: entry
    (1309189888) nsiofrrg [12:04:39:149 August 29, 2015]: output
    (1309189888) nsiocancel [12:04:39:149 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:149] snsbittrm_ts: entry
    (1309189888) snsbittrm_ts [12:04:39:149 August 29, 2015]: a normal exit
    (1309189888) [29 August 2015 12:04:39:149] snsbitts_ts: entry
    (1309189888) snsbitts_ts [12:04:39:149 August 29, 2015]: the ILO has acquired
    (1309189888) snsbitts_ts [12:04:39:149 August 29, 2015]: a normal exit
    (1309189888) [29 August 2015 12:04:39:149] snsbitcl_ts: entry
    (1309189888) snsbitcl_ts [12:04:39:149 August 29, 2015]: a normal exit
    (1309189888) [29 August 2015 12:04:39:149] nsmfr: entry
    (1309189888) nsmfr [12:04:39:149 August 29, 2015]: 2760 bytes to 0x1bd6610
    (1309189888) nsmfr [12:04:39:149 August 29, 2015]: a normal exit
    (1309189888) [29 August 2015 12:04:39:149] nsmfr: entry
    (1309189888) [29 August 2015 12:04:39:149] nsmfr: 1 576 bytes to 0x1bd5fe0
    (1309189888) nsmfr [12:04:39:149 August 29, 2015]: a normal exit
    (1309189888) nsopen [12:04:39:149 August 29, 2015]: output error
    (1309189888) [29 August 2015 12:04:39:149] nsclose: entry
    (1309189888) nsclose [12:04:39:149 August 29, 2015]: a normal exit
    (1309189888) [29 August 2015 12:04:39:149] nladget: entry
    (1309189888) nladget [12:04:39:149 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:149] nsmfr: entry
    (1309189888) nsmfr [12:04:39:149 August 29, 2015]: 171 bytes to 0x1bcbc50
    (1309189888) nsmfr [12:04:39:149 August 29, 2015]: a normal exit
    (1309189888) [29 August 2015 12:04:39:149] nsmfr: entry
    (1309189888) nsmfr [12:04:39:149 August 29, 2015]: 272 bytes to 0x1bca160
    (1309189888) nsmfr [12:04:39:149 August 29, 2015]: a normal exit
    (1309189888) [29 August 2015 12:04:39:149] nladtrm: entry
    (1309189888) nladtrm [12:04:39:149 August 29, 2015]: output
    (1309189888) nscall [12:04:39:149 August 29, 2015]: output error
    (1309189888) nioqper [12:04:39:149 August 29, 2015]: error in nscall
    (1309189888) [29 August 2015 12:04:39:149] nioqper: main ns an error in code: 12560
    (1309189888) [29 August 2015 12:04:39:149] nioqper: ns (2) error code: 0
    (1309189888) [29 August 2015 12:04:39:149] nioqper: nt main err in code: 540
    (1309189888) [29 August 2015 12:04:39:149] nioqper: nt (2) error code: 0
    (1309189888) [29 August 2015 12:04:39:149] nioqper: nt OS err code: 0
    (1309189888) [29 August 2015 12:04:39:149] niomapnserror: entry
    (1309189888) [29 August 2015 12:04:39:149] niqme: entry
    (1309189888) niqme [12:04:39:149 August 29, 2015]: error ORA-28759
    (1309189888) niqme [12:04:39:150 August 29, 2015]: output
    (1309189888) niomapnserror [12:04:39:150 August 29, 2015]: output
    (1309189888) niotns [12:04:39:150 August 29, 2015]: unable to connect, turning 28759
    (1309189888) niotns [12:04:39:150 August 29, 2015]: output
    (1309189888) [29 August 2015 12:04:39:150] nigcui: entry
    (1309189888) nigcui [12:04:39:150 August 29, 2015]: Clr user interrupt: hdl = 1, prc is 0x4f9596f0, ctx = 0x1bb32d0.
    (1309189888) nigcui [12:04:39:150 August 29, 2015]: exit (0)
    (1309189888) [29 August 2015 12:04:39:150] snsbittrm_ts: entry
    (1309189888) snsbittrm_ts [12:04:39:150 August 29, 2015]: a normal exit
    (1309189888) [29 August 2015 12:04:39:150] snsbittrm_ts: entry
    (1309189888) snsbittrm_ts [12:04:39:150 August 29, 2015]: a normal exit
    (1309189888) [29 August 2015 12:04:39:150] nsbfrfl: entry
    (1309189888) [29 August 2015 12:04:39:150] nsbrfr: entry
    (1309189888) [29 August 2015 12:04:39:150] nsbrfr: nsbfs to 0x1bcbfe0, to 0x1bd70e0.
    (1309189888) nsbrfr [12:04:39:150 August 29, 2015]: a normal exit
    (1309189888) nsbfrfl [12:04:39:150 August 29, 2015]: a normal exit
    (1309189888) nigtrm [12:04:39:150 August 29, 2015]: count in the world NOR is size now 2
    (1309189888) nigtrm [12:04:39:150 August 29, 2015]: count in the global area of NL is now 2


    PLEEEEASE, anyone! I see that only one .so file is missing. I can't find anything about this exact problem anywhere; similar issues exist with other .so files, and the recommendation there is to run the following:

    /usr/sbin/semanage fcontext -a -t textrel_shlib_t $ORACLE_HOME/lib/libnque11.so

    but the system says that I don't have "semanage". Any suggestion would be much appreciated!


    my /usr/lib/oracle/11.2/client64/network/admin/tnsnames.ora

    AVAYAPDSDB =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCPS)(HOST = ccpdsdko)(PORT = 2484))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = orastd)
        )
      )


    my /usr/lib/oracle/11.2/client64/network/admin/sqlnet.ora

    SQLNET.AUTHENTICATION_SERVICES = (TCPS, DOB)
    SSL_VERSION = 0
    SSL_CLIENT_AUTHENTICATION = TRUE
    NAMES.DIRECTORY_PATH = (TNSNAMES)
    SSL_SERVER_DN_MATCH = FALSE
    SSL_CIPHER_SUITES = (SSL_RSA_EXPORT_WITH_RC4_40_MD5)
    WALLET_LOCATION =
      (SOURCE =
        (METHOD = FILE)
        (METHOD_DATA =
          (DIRECTORY = /usr/lib/oracle/11.2/client64/network/admin/AVAYAPDSDB/wallet)
        )
      )

    SQLNET.WALLET_OVERRIDE = TRUE

    DIAG_ADR_ENABLED = OFF
    TRACE_LEVEL_CLIENT = SUPPORT
    TRACE_DIRECTORY_CLIENT = /tmp/ora

    There were 2 problems I was fighting with. First, the wallet location was pointing to the directory one level above where my wallet files were actually placed; and second, I was using the short connection syntax.
    I was using this:

    # /usr/lib/oracle/11.2/client64/bin/sqlplus /@AVAYAPDSDB

    And the successful form is this:

    # /usr/lib/oracle/11.2/client64/bin/sqlplus login/password@AVAYAPDSDB

    Hope all of this can be useful to someone. The main advice from me to anyone facing similar issues: turn on client tracing!

    Good luck!
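
    For anyone hitting the same ORA-28759, a quick sanity check of the wallet path might look like this (a sketch; the directory is the one used in this thread, and orapki is only present with a full client or server installation, not the Instant Client):

    ls -l /usr/lib/oracle/11.2/client64/network/admin/AVAYAPDSDB/wallet
    # WALLET_LOCATION must point at the directory that directly contains
    # cwallet.sso / ewallet.p12, not at its parent
    orapki wallet display -wallet /usr/lib/oracle/11.2/client64/network/admin/AVAYAPDSDB/wallet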

  • Unable to search for orders/customers in CSC 11.1

    Hello

    I am trying to perform order and profile searches in CSC, but they return no results.

    I have the components /atg/commerce/textsearch/OrderOutputConfig/ and /atg/userprofiling/textsearch/ProfileOutputConfig/, and I found that they index perfectly into the tables SRCH_ORDER_TOKENS and SRCH_PROFILE_TOKENS respectively.

    After enabling loggingDebug on the two components, I found that the search query adds a further condition that seems related to multisite, pfrmZeroRealmsAccessible, but all the tokens stored in the DB for orders and customers have the value pfrmdft. This is the query extracted from the logs:

    [++ SQLQuery ++]
    SELECT t1.id
    FROM srch_order_tokens t1
    WHERE CONTAINS(t1.tokens,?,0) > 0
    -- Parameters --
    p[1] = {pd: tokens} pflnAhmad% AND pfrmZeroRealmsAccessible% (java.lang.String)
    [-- SQLQuery --]

    Note: My application has only a single site (not multisite), but I found a few multisite-related configuration files created by CIM which I can't delete.

    Please help me answer the following questions:

    1. Is this problem really related to the multisite configuration, and how do I fix it for order and customer searches?

    2. In Oracle Commerce 11.1, how do I disable working with multisite?

    --

    Thank you

    Abdullah

    If you have not configured multisite, then you must change the property 'siteAccessControlOn' to false in the component below:

    /atg/commerce/custsvc/environment/CSREnvironmentTools/
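
    A minimal sketch of that change as a layered configuration file (the component path comes from the line above and the property name from this answer; where your localconfig layer lives depends on your module layout):

    # <your-config-layer>/atg/commerce/custsvc/environment/CSREnvironmentTools.properties
    siteAccessControlOn=false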

    For more details, you can refer to the Oracle docs link below:

    Oracle ATG Commerce Web - Site access control
