Checking Physical-Mode (pMode) RDM Configuration
How can I use PowerCLI to check whether physical-mode RDMs are mapped and zoned properly everywhere in a cluster, so that the VMs can run on any host and always see the LUNs they need to connect to?
Actually, it checks both, and it also lists LUNs visible to an ESXi node that are not in use.
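For reference, a check like this can be sketched in PowerCLI. The sketch below is untested and assumes an existing Connect-VIServer session; the cluster name "Prod" is a placeholder. It gathers the canonical names of all physical-mode RDM LUNs used by VMs in the cluster, then flags any host that cannot see one of them:

```powershell
# Sketch, not a drop-in script: assumes Connect-VIServer has already been
# run and that the cluster is named "Prod" (a placeholder).
$cluster = Get-Cluster -Name "Prod"

# Canonical names (naa.*) of every physical-mode RDM attached to a VM
$rdmLuns = Get-VM -Location $cluster |
    Get-HardDisk -DiskType RawPhysical |
    Select-Object -ExpandProperty ScsiCanonicalName -Unique

# Flag any host in the cluster that cannot see one of those LUNs
foreach ($esx in (Get-VMHost -Location $cluster)) {
    $visible = Get-ScsiLun -VmHost $esx -LunType disk |
        Select-Object -ExpandProperty CanonicalName
    $missing = $rdmLuns | Where-Object { $visible -notcontains $_ }
    if ($missing) { "$($esx.Name) is MISSING: $($missing -join ', ')" }
    else          { "$($esx.Name) sees all RDM LUNs" }
}
```

A host reported as missing a LUN points at a zoning or LUN-masking gap on the fabric or array side for that host's HBAs.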
Tags: VMware
Similar Questions
-
WebCenter Portal 11.1.1.9.2 has been installed on a single node and configured to use an external policy-based JPS store backed by OID LDAP 11.1.1.7, with Oracle Access Manager 11.1.2.2.0 for single sign-on.
When starting the WebCenter Portal managed server (and all the other managed servers: Portlet, Collaboration, Utilities, etc.), the following error is recorded in the log files:
<Oct 26, 2015 10:35:32 AM COT> <Warning> <oracle.jps.idmgmt> <JPS-01520> <Cannot initialize identity store, cause: oracle.security.idm.ConfigurationException: Failed to connect to directory. Check configuration information..> <Oct 26, 2015 10:35:32 AM COT> <Error> <oracle.adf.mbean.share.connection.ConnectionsHelper> <BEA-000000> <Failed to get credentials for alias ADF and connection name PageletConnection java.lang.RuntimeException: java.security.PrivilegedActionException: oracle.security.jps.service.idstore.IdentityStoreException: JPS-01520: Cannot initialize identity store, cause: oracle.security.idm.ConfigurationException: Failed to connect to directory. Check configuration information.. at oracle.adf.share.security.providers.jps.JpsUtil.getDefaultIdentityStore(JpsUtil.java:386) at oracle.adf.share.security.providers.jps.JpsUtil.getDefaultIdentityStore(JpsUtil.java:363) at oracle.adf.share.security.providers.jps.JpsUtil.getUserUniqueIdentifier(JpsUtil.java:272) at oracle.adf.share.security.providers.jps.JpsUtil.getUserUniqueIdentifier(JpsUtil.java:233) at oracle.adf.share.security.providers.jps.CSFCredentialStore.getCurrentUserUniqueID(CSFCredentialStore.java:1253) at oracle.adf.share.security.providers.jps.CSFCredentialStore.fetchCredential(CSFCredentialStore.java:489) at oracle.adf.share.security.providers.jps.CSFCredentialStore.fetchCredential(CSFCredentialStore.java:653) at oracle.adf.share.security.credentialstore.CredentialStore.fetchCredential(CredentialStore.java:187) at oracle.adf.mbean.share.connection.ConnectionsHelper.getCredentials(ConnectionsHelper.java:208) at oracle.adf.mbean.share.connection.ReferenceHelper.getCredentials(ReferenceHelper.java:334) at oracle.adf.mbean.share.connection.ReferenceHelper.createReference(ReferenceHelper.java:299) at oracle.adf.mbean.share.connection.ConnectionsRuntimeMXBeanImpl.registerBean(ConnectionsRuntimeMXBeanImpl.java:499) at 
oracle.adf.mbean.share.connection.ConnectionsRuntimeMXBeanImpl.createConnection(ConnectionsRuntimeMXBeanImpl.java:577) at oracle.adf.mbean.share.connection.ConnectionsRuntimeMXBeanImpl.configObjectReloaded(ConnectionsRuntimeMXBeanImpl.java:778) at oracle.adf.mbean.share.connection.ConnectionsRuntimeMXBeanImpl.postRegister(ConnectionsRuntimeMXBeanImpl.java:1089) at oracle.as.jmx.framework.standardmbeans.spi.OracleStandardEmitterMBean.doPostRegister(OracleStandardEmitterMBean.java:556) at oracle.adf.mbean.share.AdfMBeanInterceptor.internalPostRegister(AdfMBeanInterceptor.java:223) at oracle.as.jmx.framework.generic.spi.interceptors.AbstractMBeanInterceptor.doPostRegister(AbstractMBeanInterceptor.java:204) at oracle.as.jmx.framework.generic.spi.interceptors.DefaultMBeanInterceptor.internalPostRegister(DefaultMBeanInterceptor.java:87) at oracle.as.jmx.framework.generic.spi.interceptors.AbstractMBeanInterceptor.doPostRegister(AbstractMBeanInterceptor.java:204) at oracle.security.jps.ee.jmx.JpsJmxInterceptor$4.run(JpsJmxInterceptor.java:605) at java.security.AccessController.doPrivileged(Native Method) at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:324) at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:464) at oracle.security.jps.ee.jmx.JpsJmxInterceptor.internalPostRegister(JpsJmxInterceptor.java:622) at oracle.as.jmx.framework.generic.spi.interceptors.AbstractMBeanInterceptor.doPostRegister(AbstractMBeanInterceptor.java:204) at oracle.as.jmx.framework.generic.spi.interceptors.DefaultMBeanInterceptor.internalPostRegister(DefaultMBeanInterceptor.java:87) at oracle.as.jmx.framework.generic.spi.interceptors.ContextClassLoaderMBeanInterceptor.internalPostRegister(ContextClassLoaderMBeanInterceptor.java:167) at oracle.as.jmx.framework.generic.spi.interceptors.AbstractMBeanInterceptor.doPostRegister(AbstractMBeanInterceptor.java:204) at 
oracle.as.jmx.framework.generic.spi.interceptors.DefaultMBeanInterceptor.internalPostRegister(DefaultMBeanInterceptor.java:87) at oracle.as.jmx.framework.generic.spi.interceptors.AbstractMBeanInterceptor.doPostRegister(AbstractMBeanInterceptor.java:204) at oracle.as.jmx.framework.standardmbeans.spi.OracleStandardEmitterMBean.postRegister(OracleStandardEmitterMBean.java:521) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.postRegister(DefaultMBeanServerInterceptor.java:1024) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:974) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$27.run(WLSMBeanServerInterceptorBase.java:714) at java.security.AccessController.doPrivileged(Native Method) at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase.registerMBean(WLSMBeanServerInterceptorBase.java:709) at weblogic.management.mbeanservers.internal.JMXContextInterceptor.registerMBean(JMXContextInterceptor.java:445) at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$27.run(WLSMBeanServerInterceptorBase.java:712) at java.security.AccessController.doPrivileged(Native Method) at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase.registerMBean(WLSMBeanServerInterceptorBase.java:709) at weblogic.management.jmx.mbeanserver.WLSMBeanServer.registerMBean(WLSMBeanServer.java:462) at oracle.as.jmx.framework.wls.spi.security.PrivilegedMBeanServerInterceptor$1.run(PrivilegedMBeanServerInterceptor.java:55) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363) at 
oracle.as.jmx.framework.wls.spi.security.PrivilegedMBeanServerInterceptor.registerMBean(PrivilegedMBeanServerInterceptor.java:60) at oracle.adf.mbean.share.connection.ADFConnectionLifeCycleCallBack.contextInitialized(ADFConnectionLifeCycleCallBack.java:111) at weblogic.servlet.internal.EventsManager$FireContextListenerAction.run(EventsManager.java:481) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120) at weblogic.servlet.internal.EventsManager.notifyContextCreatedEvent(EventsManager.java:181) at weblogic.servlet.internal.WebAppServletContext.preloadResources(WebAppServletContext.java:1871) at weblogic.servlet.internal.WebAppServletContext.start(WebAppServletContext.java:3173) at weblogic.servlet.internal.WebAppModule.startContexts(WebAppModule.java:1527) at weblogic.servlet.internal.WebAppModule.start(WebAppModule.java:486) at weblogic.application.internal.flow.ModuleStateDriver$3.next(ModuleStateDriver.java:425) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:52) at weblogic.application.internal.flow.ModuleStateDriver.start(ModuleStateDriver.java:119) at weblogic.application.internal.flow.ScopedModuleDriver.start(ScopedModuleDriver.java:200) at weblogic.application.internal.flow.ModuleListenerInvoker.start(ModuleListenerInvoker.java:247) at weblogic.application.internal.flow.ModuleStateDriver$3.next(ModuleStateDriver.java:425) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:52) at weblogic.application.internal.flow.ModuleStateDriver.start(ModuleStateDriver.java:119) at weblogic.application.internal.flow.StartModulesFlow.activate(StartModulesFlow.java:27) at weblogic.application.internal.BaseDeployment$2.next(BaseDeployment.java:671) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:52) at 
weblogic.application.internal.BaseDeployment.activate(BaseDeployment.java:212) at weblogic.application.internal.EarDeployment.activate(EarDeployment.java:59) at weblogic.application.internal.DeploymentStateChecker.activate(DeploymentStateChecker.java:161) at weblogic.deploy.internal.targetserver.AppContainerInvoker.activate(AppContainerInvoker.java:80) at weblogic.deploy.internal.targetserver.BasicDeployment.activate(BasicDeployment.java:187) at weblogic.deploy.internal.targetserver.BasicDeployment.activateFromServerLifecycle(BasicDeployment.java:379) at weblogic.management.deploy.internal.DeploymentAdapter$1.doActivate(DeploymentAdapter.java:51) at weblogic.management.deploy.internal.DeploymentAdapter.activate(DeploymentAdapter.java:200) at weblogic.management.deploy.internal.AppTransition$2.transitionApp(AppTransition.java:30) at weblogic.management.deploy.internal.ConfiguredDeployments.transitionApps(ConfiguredDeployments.java:261) at weblogic.management.deploy.internal.ConfiguredDeployments.transitionApps(ConfiguredDeployments.java:220) at weblogic.management.deploy.internal.ConfiguredDeployments.activate(ConfiguredDeployments.java:169) at weblogic.management.deploy.internal.ConfiguredDeployments.deploy(ConfiguredDeployments.java:123) at weblogic.management.deploy.internal.DeploymentServerService.resume(DeploymentServerService.java:180) at weblogic.management.deploy.internal.DeploymentServerService.start(DeploymentServerService.java:96) at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:263) at weblogic.work.ExecuteThread.run(ExecuteThread.java:221) Caused By: java.security.PrivilegedActionException: oracle.security.jps.service.idstore.IdentityStoreException: JPS-01520: Cannot initialize identity store, cause: oracle.security.idm.ConfigurationException: Failed to connect to directory. Check configuration information.. 
at java.security.AccessController.doPrivileged(Native Method) at oracle.adf.share.security.providers.jps.JpsUtil.getDefaultIdentityStore(JpsUtil.java:381) at oracle.adf.share.security.providers.jps.JpsUtil.getDefaultIdentityStore(JpsUtil.java:363) at oracle.adf.share.security.providers.jps.JpsUtil.getUserUniqueIdentifier(JpsUtil.java:272) at oracle.adf.share.security.providers.jps.JpsUtil.getUserUniqueIdentifier(JpsUtil.java:233) at oracle.adf.share.security.providers.jps.CSFCredentialStore.getCurrentUserUniqueID(CSFCredentialStore.java:1253) at oracle.adf.share.security.providers.jps.CSFCredentialStore.fetchCredential(CSFCredentialStore.java:489) at oracle.adf.share.security.providers.jps.CSFCredentialStore.fetchCredential(CSFCredentialStore.java:653) at oracle.adf.share.security.credentialstore.CredentialStore.fetchCredential(CredentialStore.java:187) at oracle.adf.mbean.share.connection.ConnectionsHelper.getCredentials(ConnectionsHelper.java:208) at oracle.adf.mbean.share.connection.ReferenceHelper.getCredentials(ReferenceHelper.java:334) at oracle.adf.mbean.share.connection.ReferenceHelper.createReference(ReferenceHelper.java:299) at oracle.adf.mbean.share.connection.ConnectionsRuntimeMXBeanImpl.registerBean(ConnectionsRuntimeMXBeanImpl.java:499) at oracle.adf.mbean.share.connection.ConnectionsRuntimeMXBeanImpl.createConnection(ConnectionsRuntimeMXBeanImpl.java:577) at oracle.adf.mbean.share.connection.ConnectionsRuntimeMXBeanImpl.configObjectReloaded(ConnectionsRuntimeMXBeanImpl.java:778) at oracle.adf.mbean.share.connection.ConnectionsRuntimeMXBeanImpl.postRegister(ConnectionsRuntimeMXBeanImpl.java:1089) at oracle.as.jmx.framework.standardmbeans.spi.OracleStandardEmitterMBean.doPostRegister(OracleStandardEmitterMBean.java:556) at oracle.adf.mbean.share.AdfMBeanInterceptor.internalPostRegister(AdfMBeanInterceptor.java:223) at oracle.as.jmx.framework.generic.spi.interceptors.AbstractMBeanInterceptor.doPostRegister(AbstractMBeanInterceptor.java:204) at 
oracle.as.jmx.framework.generic.spi.interceptors.DefaultMBeanInterceptor.internalPostRegister(DefaultMBeanInterceptor.java:87) at oracle.as.jmx.framework.generic.spi.interceptors.AbstractMBeanInterceptor.doPostRegister(AbstractMBeanInterceptor.java:204) at oracle.security.jps.ee.jmx.JpsJmxInterceptor$4.run(JpsJmxInterceptor.java:605) at java.security.AccessController.doPrivileged(Native Method) at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:324) at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:464) at oracle.security.jps.ee.jmx.JpsJmxInterceptor.internalPostRegister(JpsJmxInterceptor.java:622) at oracle.as.jmx.framework.generic.spi.interceptors.AbstractMBeanInterceptor.doPostRegister(AbstractMBeanInterceptor.java:204) at oracle.as.jmx.framework.generic.spi.interceptors.DefaultMBeanInterceptor.internalPostRegister(DefaultMBeanInterceptor.java:87) at oracle.as.jmx.framework.generic.spi.interceptors.ContextClassLoaderMBeanInterceptor.internalPostRegister(ContextClassLoaderMBeanInterceptor.java:167) at oracle.as.jmx.framework.generic.spi.interceptors.AbstractMBeanInterceptor.doPostRegister(AbstractMBeanInterceptor.java:204) at oracle.as.jmx.framework.generic.spi.interceptors.DefaultMBeanInterceptor.internalPostRegister(DefaultMBeanInterceptor.java:87) at oracle.as.jmx.framework.generic.spi.interceptors.AbstractMBeanInterceptor.doPostRegister(AbstractMBeanInterceptor.java:204) at oracle.as.jmx.framework.standardmbeans.spi.OracleStandardEmitterMBean.postRegister(OracleStandardEmitterMBean.java:521) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.postRegister(DefaultMBeanServerInterceptor.java:1024) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:974) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$27.run(WLSMBeanServerInterceptorBase.java:714) at java.security.AccessController.doPrivileged(Native Method) at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase.registerMBean(WLSMBeanServerInterceptorBase.java:709) at weblogic.management.mbeanservers.internal.JMXContextInterceptor.registerMBean(JMXContextInterceptor.java:445) at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$27.run(WLSMBeanServerInterceptorBase.java:712) at java.security.AccessController.doPrivileged(Native Method) at weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase.registerMBean(WLSMBeanServerInterceptorBase.java:709) at weblogic.management.jmx.mbeanserver.WLSMBeanServer.registerMBean(WLSMBeanServer.java:462) at oracle.as.jmx.framework.wls.spi.security.PrivilegedMBeanServerInterceptor$1.run(PrivilegedMBeanServerInterceptor.java:55) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363) at oracle.as.jmx.framework.wls.spi.security.PrivilegedMBeanServerInterceptor.registerMBean(PrivilegedMBeanServerInterceptor.java:60) at oracle.adf.mbean.share.connection.ADFConnectionLifeCycleCallBack.contextInitialized(ADFConnectionLifeCycleCallBack.java:111) at weblogic.servlet.internal.EventsManager$FireContextListenerAction.run(EventsManager.java:481) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120) at weblogic.servlet.internal.EventsManager.notifyContextCreatedEvent(EventsManager.java:181) at weblogic.servlet.internal.WebAppServletContext.preloadResources(WebAppServletContext.java:1871) at 
weblogic.servlet.internal.WebAppServletContext.start(WebAppServletContext.java:3173) at weblogic.servlet.internal.WebAppModule.startContexts(WebAppModule.java:1527) at weblogic.servlet.internal.WebAppModule.start(WebAppModule.java:486) at weblogic.application.internal.flow.ModuleStateDriver$3.next(ModuleStateDriver.java:425) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:52) at weblogic.application.internal.flow.ModuleStateDriver.start(ModuleStateDriver.java:119) at weblogic.application.internal.flow.ScopedModuleDriver.start(ScopedModuleDriver.java:200) at weblogic.application.internal.flow.ModuleListenerInvoker.start(ModuleListenerInvoker.java:247) at weblogic.application.internal.flow.ModuleStateDriver$3.next(ModuleStateDriver.java:425) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:52) at weblogic.application.internal.flow.ModuleStateDriver.start(ModuleStateDriver.java:119) at weblogic.application.internal.flow.StartModulesFlow.activate(StartModulesFlow.java:27) at weblogic.application.internal.BaseDeployment$2.next(BaseDeployment.java:671) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:52) at weblogic.application.internal.BaseDeployment.activate(BaseDeployment.java:212) at weblogic.application.internal.EarDeployment.activate(EarDeployment.java:59) at weblogic.application.internal.DeploymentStateChecker.activate(DeploymentStateChecker.java:161) at weblogic.deploy.internal.targetserver.AppContainerInvoker.activate(AppContainerInvoker.java:80) at weblogic.deploy.internal.targetserver.BasicDeployment.activate(BasicDeployment.java:187) at weblogic.deploy.internal.targetserver.BasicDeployment.activateFromServerLifecycle(BasicDeployment.java:379) at weblogic.management.deploy.internal.DeploymentAdapter$1.doActivate(DeploymentAdapter.java:51) at weblogic.management.deploy.internal.DeploymentAdapter.activate(DeploymentAdapter.java:200) at 
weblogic.management.deploy.internal.AppTransition$2.transitionApp(AppTransition.java:30) at weblogic.management.deploy.internal.ConfiguredDeployments.transitionApps(ConfiguredDeployments.java:261) at weblogic.management.deploy.internal.ConfiguredDeployments.transitionApps(ConfiguredDeployments.java:220) at weblogic.management.deploy.internal.ConfiguredDeployments.activate(ConfiguredDeployments.java:169) at weblogic.management.deploy.internal.ConfiguredDeployments.deploy(ConfiguredDeployments.java:123) at weblogic.management.deploy.internal.DeploymentServerService.resume(DeploymentServerService.java:180) at weblogic.management.deploy.internal.DeploymentServerService.start(DeploymentServerService.java:96) at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:263) at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
OID contains all the users and their group memberships, and they can be viewed correctly using DOHAD.
Users cannot log in to WebCenter Portal or to any other application in the domain, because the JPS identity store does not get initialized.
However, the JPS store does get initialized for the Administration Server; users and group memberships can be browsed under Security Realms -> Users and Groups in the WebLogic console.
A few days ago, users connecting to WebCenter Content stopped being assigned any role.
The WebCenter Content domain starts up fine: the Admin Server's JPS store initializes correctly, and users and group memberships can be seen under Security Realms -> Users and Groups in the WebLogic console.
This error started to appear a few days ago; before that everything was normal, and users could log in to WebCenter Portal, get their group memberships from OID, and get their privileges from the JPS LDAP store.
The servers were started in order: Node Manager was used to start the Administration Server, and once the Admin Server was up, the WebLogic console was used to start the managed servers.
Is there a way to debug the JPS Store initialization?
Hello Amey
The flag for the OAM Identity Asserter is required for single sign-on functionality. In any case, the problem seems to be communication with the DNS server, which introduces a delay that can be verified using the traceroute and ping commands.
This delay caused the connection to the OID server to fail during JPS initialization.
As a workaround, the fully qualified hostname of the OID server was configured manually in /etc/hosts. After this change, JPS initializes correctly.
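For illustration, the workaround amounts to adding an entry like the following to /etc/hosts on the WebCenter node (the IP address and hostname below are placeholders, not the poster's actual values):

```
# /etc/hosts - pin the OID server's FQDN locally so JPS does not
# wait on a slow DNS lookup during identity-store initialization
10.0.0.15   oid.example.com   oid
```

The slow lookup itself can be confirmed beforehand by timing name resolution (e.g. `time nslookup <oid-fqdn>`), alongside the ping and traceroute checks mentioned above.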
Note, however, that the log showed no timeout or any other exception during initialization, which made the diagnosis difficult.
Thanks for your help.
-
WebCenter Content 11.1.1.8.7 has been installed on a single node and configured to use an external JPS security provider backed by OID 11.1.1.7.
The WebCenter Content domain uses OAM 11g R2 as the single sign-on mechanism and OID as the authentication provider.
OID contains all the users and their group memberships, and they can be viewed correctly using DOHAD.
A few days ago, users connecting to WebCenter Content stopped being assigned any role.
The WebCenter Content domain starts up fine: the Admin Server's JPS store initializes correctly, and users and group memberships can be seen under Security Realms -> Users and Groups in the WebLogic console.
However, when starting a WebCenter Content managed server, the following message appears:
<JPS-01520> <Cannot initialize identity store, cause: Failed to connect to directory. Check configuration information..>
And users get only the default authenticated roles.
whereas they should have been granted the ECM Administrators, sysadmin, and admin roles, because of the credential map configured in WebCenter Content Server.
In the Providers section, JpsUserProvider shows as down,
and using its test function, the following error is written to the WebCenter Content logs:
<Oct 23, 2015 11:48:41 AM COT> <Error> <oracle.ucm.idccs> <UCM-CS-000001> <general exception> <Oct 23, 2015 11:48:41 AM COT> <Error> <oracle.ucm.idccs> <UCM-CS-000001> <general exception intradoc.common.ServiceException: !csJpsIdentityStoreNotConfigured at idc.provider.jps.JpsUserProvider.testConnection(JpsUserProvider.java:941) at intradoc.server.proxy.ProviderStateUtils.testConnection(ProviderStateUtils.java:66) at intradoc.server.ProviderManagerService.testProvider(ProviderManagerService.java:128) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at intradoc.common.IdcMethodHolder.invokeMethod(IdcMethodHolder.java:87) at intradoc.common.ClassHelperUtils.executeMethodEx(ClassHelperUtils.java:310) at intradoc.common.ClassHelperUtils.executeMethod(ClassHelperUtils.java:295) at intradoc.server.Service.doCodeEx(Service.java:640) at intradoc.server.Service.doCode(Service.java:595) at intradoc.server.ServiceRequestImplementor.doAction(ServiceRequestImplementor.java:1693) at intradoc.server.Service.doAction(Service.java:566) at intradoc.server.ServiceRequestImplementor.doActions(ServiceRequestImplementor.java:1483) at intradoc.server.Service.doActions(Service.java:562) at intradoc.server.ServiceRequestImplementor.executeActions(ServiceRequestImplementor.java:1415) at intradoc.server.Service.executeActions(Service.java:547) at intradoc.server.ServiceRequestImplementor.doRequest(ServiceRequestImplementor.java:751) at intradoc.server.Service.doRequest(Service.java:1976) at intradoc.server.ServiceManager.processCommand(ServiceManager.java:487) at intradoc.server.IdcServerThread.processRequest(IdcServerThread.java:265) at intradoc.idcwls.IdcServletRequestUtils.doRequest(IdcServletRequestUtils.java:1358) at 
intradoc.idcwls.IdcServletRequestUtils.processFilterEvent(IdcServletRequestUtils.java:1732) at intradoc.idcwls.IdcIntegrateWrapper.processFilterEvent(IdcIntegrateWrapper.java:223) at sun.reflect.GeneratedMethodAccessor219.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at idcservlet.common.IdcMethodHolder.invokeMethod(IdcMethodHolder.java:88) at idcservlet.common.ClassHelperUtils.executeMethodEx(ClassHelperUtils.java:305) at idcservlet.common.ClassHelperUtils.executeMethodWithArgs(ClassHelperUtils.java:278) at idcservlet.ServletUtils.executeContentServerIntegrateMethodOnConfig(ServletUtils.java:1680) at idcservlet.IdcFilter.doFilter(IdcFilter.java:457) at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:61) at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:119) at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:324) at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:460) at oracle.security.jps.ee.http.JpsAbsFilter.runJaasMode(JpsAbsFilter.java:103) at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:171) at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71) at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:61) at oracle.security.wls.filter.SSOSessionSynchronizationFilter.doFilter(SSOSessionSynchronizationFilter.java:419) at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:61) at oracle.dms.servlet.DMSServletFilter.doFilter(DMSServletFilter.java:163) at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:61) at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:119) at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:324) at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:460) at 
oracle.security.jps.ee.http.JpsAbsFilter.runJaasMode(JpsAbsFilter.java:103) at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:171) at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71) at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:61) at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3748) at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3714) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120) at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2283) at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2182) at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1495) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:263) at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
This behavior also affects the SOA managed server and the WebCenter Portal managed servers.
The servers were started in the right order: first the Admin Server, then the Content managed server.
Is there a way to diagnose why the JPS security provider cannot be initialized?
The problem turned out to be communication with the DNS server, which introduced a delay in resolving the OID server's hostname; this could be verified using the traceroute and ping commands.
This delay caused the connection to the OID server to fail during JPS initialization.
As a workaround, the fully qualified hostname of the OID server was configured manually in /etc/hosts. After this change, JPS initializes correctly.
Note, however, that the log showed no timeout or any other exception during initialization, which made the diagnosis difficult.
Thanks for your help.
-
Are there any caveats when creating a cluster that will use physical-mode RDMs, while still allowing those virtual machines to fail over to other hosts? In an HA event, will the VM be able to connect to its RDM from the new host, as long as the SAN LUN masking and zoning are correct?
The main one you already highlighted, and it is general vSphere HA best practice: make sure all LUNs are presented to all hosts in the cluster.
Beyond that, there is nothing special outside standard HA best practices (same networks, port group names, etc.). To be sure you have no problem, you could power on the VM on each host in the cluster to confirm there are no issues, but that shouldn't be necessary.
-
Checking Virtual Machine NIC Configuration
Hello people.
I run a virtual infrastructure that separates discrete security zones (DMZ/Intranet/Extranet/etc.) using port groups. My main concern is ensuring that no VM is accidentally bridged across port groups in two separate security zones. Is anyone aware of a third-party or built-in tool that will let me list and check the virtual NIC configuration of every VM in my datacenter? We use ESX 3.5 and VirtualCenter 2.5.
Thank you...
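For what it's worth, a basic audit of this can also be sketched with the VI Toolkit / PowerCLI. The sketch below is untested and assumes an existing Connect-VIServer session against your VirtualCenter; the output file name is a placeholder:

```powershell
# Sketch: dump every VM NIC and its port group, then flag VMs whose
# NICs span more than one port group (possible zone-bridging).
$report = Get-VM | Get-NetworkAdapter |
    Select-Object @{N='VM';E={$_.Parent.Name}}, Name, NetworkName

# Full inventory, exported for review
$report | Sort-Object VM | Export-Csv vm-nic-audit.csv -NoTypeInformation

# VMs connected to two or more different port groups
$report | Group-Object VM | Where-Object {
    ($_.Group | Select-Object -ExpandProperty NetworkName | Sort-Object -Unique).Count -gt 1
} | ForEach-Object {
    $groups = $_.Group | Select-Object -ExpandProperty NetworkName | Sort-Object -Unique
    "$($_.Name): $($groups -join ', ')"
}
```

Review the flagged VMs by hand: a multi-homed VM that legitimately needs two networks will also show up here, so the list is a starting point, not a verdict.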
Have you looked at NetWrix VMware Reporter? It will monitor and audit your VMware changes and much more. Have you also thought about tightening your VirtualCenter permissions so that no one other than specified admin groups can reach your VMs? You can then use Tasks & Events to see what activities were performed by specific users. We have implemented strict access to our VC systems and stripped unnecessary permissions to that effect.
If you found this information useful, please consider awarding points for 'Correct' or 'Helpful'. Thank you!
Kind regards
Stefan Nguyen
VMware vExpert 2009
iGeek Systems Inc.
VMware, Citrix, Microsoft Consultant
-
Best Practices Check - Communication Service Configuration
So, here's the use case we have...
Background:
We have an FMS instance with several teams using multiple monitored applications, each with their own specific communication needs. Infrastructure teams, such as the database or server teams, are also included on this FMS. We have services configured around applications and their dependencies, so a single object will exist in several services.
To work around the lack of configuration granularity and the horrible formatting of the service-based e-mail, I created an event rule that queries the FSMServices affected by a given event and iterates over all the unique services, pulling in the notification settings and triggering the appropriate CommandActions or EmailActions accordingly... we store the options as -d flags in the service's shortDesc field.
Example:
(You can ask for a detailed explanation of what the settings do, or what the levels are called, if you wish - but they are just an additive representation of Foglight's severity levels.)
Here's our new problem:
We have teams who want notifications for certain rules at certain severities (such as Critical), but not others. This has created the need for a 'whitelist' or 'blacklist' of rule names per event, to determine whether an event should be communicated to our NOC or paged out to our teams.
My proposed solution:
We would create a new cartridge (FoglightCommunication) containing a custom dashboard and a topology definition for an FSMServiceConfig object. This object would hold the whitelist or blacklist of rules to be applied for each service. This TopologyObject would also take over the functionality currently served by my shortDesc variables... essentially, it would contain all the configuration information relevant to its corresponding FSMService object. We have experience creating advanced modules, including Foglight cartridges/agents/topology definitions.
The dashboard would exist to facilitate the configuration of this new object and to make the current communication configuration easy to visualize. It would also allow teams less versed in Foglight to more easily and completely configure their own communication settings. Empowering the team owner is always a good thing.
My Question:
Does anyone at Quest (such a cooler name than Dell) see a problem with this? My only concern is that we could lose all our configuration information by uninstalling the cartridge during an upgrade problem. I might work around that with an export/import option, but it's a messy, non-foolproof solution. Is there a way to specify that the data should be persisted, even if the cartridge that defined the topology type is uninstalled?
I'd appreciate any comments or thoughts. Thank you!
Hi Adam.
This looks like a very useful customization. I don't see why your team shouldn't move forward with it.
I also like the idea of building an import/export feature into your cartridge in order to preserve the configuration in case you need to uninstall it. Note that even in this case, custom topology data that was written to the Foglight repository will still be there (i.e. it will not be purged unless you specifically request it), so you may be able to get Foglight to retain the configuration information that is important to you.
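An export/import safeguard like the one mentioned can be as simple as round-tripping the config objects through JSON (a generic sketch, independent of any Foglight API):

```python
import json

def export_configs(configs, path):
    """Write service configs to a JSON file as an uninstall-safe backup."""
    with open(path, "w") as f:
        json.dump(configs, f, indent=2, sort_keys=True)

def import_configs(path):
    """Read service configs back from a JSON backup."""
    with open(path) as f:
        return json.load(f)
```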
I encourage you to update the community on your progress on this and send questions, screencaps, etc., as needed.
Thank you!
Robert Statsinger
-
Check configuration on S170 L4 traffic monitor?
Can someone check that I have made these connections correctly on our new S170? Greatly grateful in advance!
On the switch, I created one monitor session with the following commands:
monitor session 1 source interface Fa6/0/38 , Fa2/0/48
monitor session 1 destination interface Gi1/0/40
We have two 50 Mbps internet connections, so the firewalls are only on 100 Mb (Fa) ports; GigE ports are at a premium and I don't want to lose the PoE ports (which are grouped together).
Fa6/0/38 is the LAN-side connection of our main firewall to the internet. All traffic from our local network to the outside world must pass through here.
Fa2/0/48 is the failover ASA firewall; if for some reason the primary is down, traffic to the outside world goes through this port via the secondary firewall.
Gi1/0/40 is a gig port which is patched to the T1 port of the WSA S170.
On the WSA, Network > Interfaces shows the L4 Traffic Monitor wiring type as Duplex TAP: T1 (In/Out).
Under Security Services > L4 Traffic Monitor, L4 traffic monitoring is enabled and traffic is monitored on all ports except the web ports (HTTP/HTTPS). The rules update correctly and the license for this feature is active.
So is this set up correctly? Is there a way to test it? Should I set the L4 Traffic Monitor to watch all ports, or just let the WCCP redirect of ports 80/443 from the firewall handle that filtering and use L4TM for "everything else"?
When I go to the L4 Traffic Monitor reports, there is no data found. That's probably because there is no suspicious activity or malware, but how can I be sure it is working?
It looks like it is set up properly.
I set mine to look at "Everything else."
I do not know how to test to see if it alarms...
-
Failure of the pre-installation checks for Oracle Real Application Clusters 12c.
Hi all. Please help to investigate and resolve. Thank you.
[grid@orac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n orac1,orac2 -fixup -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "orac1"
Destination Node Reachable?
------------------------------------ ------------------------
orac1 yes
orac2 yes
Result: Node reachability check passed from node "orac1"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Status
------------------------------------ ------------------------
orac2 passed
orac1 passed
Result: User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Node Name Status
------------------------------------ ------------------------
orac1 passed
orac2 passed
Verification of the hosts config file successful
Interface information for node "orac1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 10.154.137.101 10.154.137.0 0.0.0.0 10.0.4.2 08:00:27:28:D1:1F 1500
eth1 10.154.138.101 10.154.138.0 0.0.0.0 10.0.4.2 08:00:27:51:E8:B9 1500
eth2 10.0.4.15 10.0.4.0 0.0.0.0 10.0.4.2 08:00:27:5B:D7:29 1500
Interface information for node "orac2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 10.154.137.102 10.154.137.0 0.0.0.0 10.0.4.2 08:00:27:7F:C8:70 1500
eth1 10.154.138.102 10.154.138.0 0.0.0.0 10.0.4.2 08:00:27:9D:27:D8 1500
eth2 10.0.4.15 10.0.4.0 0.0.0.0 10.0.4.2 08:00:27:6E:92:43 1500
Check: Node connectivity of subnet "10.154.137.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
orac1[10.154.137.101] orac2[10.154.137.102] yes
Result: Node connectivity passed for subnet "10.154.137.0" with node(s) orac1,orac2
Check: TCP connectivity of subnet "10.154.137.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
orac1:10.154.137.101 orac2:10.154.137.102 failed
ERROR:
PRVF-7617: Node connectivity between "orac1:10.154.137.101" and "orac2:10.154.137.102" failed
Result: TCP connectivity check failed for subnet "10.154.137.0"
Check: Node connectivity of subnet "10.154.138.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
orac1[10.154.138.101] orac2[10.154.138.102] yes
Result: Node connectivity passed for subnet "10.154.138.0" with node(s) orac1,orac2
Check: TCP connectivity of subnet "10.154.138.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
orac1:10.154.138.101 orac2:10.154.138.102 failed
ERROR:
PRVF-7617: Node connectivity between "orac1:10.154.138.101" and "orac2:10.154.138.102" failed
Result: TCP connectivity check failed for subnet "10.154.138.0"
Check: Node connectivity of subnet "10.0.4.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
orac1[10.0.4.15] orac2[10.0.4.15] yes
Result: Node connectivity passed for subnet "10.0.4.0" with node(s) orac1,orac2
Check: TCP connectivity of subnet "10.0.4.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
orac1:10.0.4.15 orac2:10.0.4.15 failed
ERROR:
PRVF-7617: Node connectivity between "orac1:10.0.4.15" and "orac2:10.0.4.15" failed
Result: TCP connectivity check failed for subnet "10.0.4.0"
Interfaces found on subnet "10.0.4.0" that are likely candidates for VIP are:
orac1 eth2:10.0.4.15
orac2 eth2:10.0.4.15
WARNING:
Could not find a suitable set of interfaces for the private interconnect
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.154.137.0".
Subnet mask consistency check passed for subnet "10.154.138.0".
Subnet mask consistency check passed for subnet "10.0.4.0".
Subnet mask consistency check passed.
ERROR:
PRVG-1172: The IP address "10.0.4.15" is on multiple interfaces "eth2,eth2" on nodes "orac2,orac1"
Result: Node connectivity check failed
Checking multicast communication...
Checking subnet "10.154.137.0" for multicast communication with multicast group "224.0.0.251"...
PRVG-11138: Interface "10.154.137.101" on node "orac1" is not able to communicate with interface "10.154.137.101" on node "orac1" over multicast group "224.0.0.251"
PRVG-11138: Interface "10.154.137.101" on node "orac1" is not able to communicate with interface "10.154.137.102" on node "orac2" over multicast group "224.0.0.251"
PRVG-11138: Interface "10.154.137.102" on node "orac2" is not able to communicate with interface "10.154.137.101" on node "orac1" over multicast group "224.0.0.251"
PRVG-11138: Interface "10.154.137.102" on node "orac2" is not able to communicate with interface "10.154.137.102" on node "orac2" over multicast group "224.0.0.251"
Checking subnet "10.154.138.0" for multicast communication with multicast group "224.0.0.251"...
PRVG-11138: Interface "10.154.138.101" on node "orac1" is not able to communicate with interface "10.154.138.101" on node "orac1" over multicast group "224.0.0.251"
PRVG-11138: Interface "10.154.138.101" on node "orac1" is not able to communicate with interface "10.154.138.102" on node "orac2" over multicast group "224.0.0.251"
PRVG-11138: Interface "10.154.138.102" on node "orac2" is not able to communicate with interface "10.154.138.101" on node "orac1" over multicast group "224.0.0.251"
PRVG-11138: Interface "10.154.138.102" on node "orac2" is not able to communicate with interface "10.154.138.102" on node "orac2" over multicast group "224.0.0.251"
Checking subnet "10.0.4.0" for multicast communication with multicast group "224.0.0.251"...
PRVG-11138: Interface "10.0.4.15" on node "orac1" is not able to communicate with interface "10.0.4.15" on node "orac1" over multicast group "224.0.0.251"
PRVG-11138: Interface "10.0.4.15" on node "orac1" is not able to communicate with interface "10.0.4.15" on node "orac2" over multicast group "224.0.0.251"
PRVG-11138: Interface "10.0.4.15" on node "orac2" is not able to communicate with interface "10.0.4.15" on node "orac1" over multicast group "224.0.0.251"
PRVG-11138: Interface "10.0.4.15" on node "orac2" is not able to communicate with interface "10.0.4.15" on node "orac2" over multicast group "224.0.0.251"
Checking ASMLib configuration.
Node Name Status
------------------------------------ ------------------------
orac1 passed
orac2 passed
Result: Check for ASMLib configuration passed.
Check: Total memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 5.714GB (5991520.0KB) 4GB (4194304.0KB) passed
orac1 5.714GB (5991520.0KB) 4GB (4194304.0KB) passed
Result: Total memory check passed
Check: Available memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 5.2293GB (5483324.0KB) 50MB (51200.0KB) passed
orac1 5.1456GB (5395600.0KB) 50MB (51200.0KB) passed
Result: Available memory check passed
Check: Swap space
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 6.125GB (6422520.0KB) 5.714GB (5991520.0KB) passed
orac1 6.125GB (6422520.0KB) 5.714GB (5991520.0KB) passed
Result: Swap space check passed
Check: Free disk space for "orac2:/usr,orac2:/var,orac2:/etc,orac2:/sbin,orac2:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/usr orac2 / 37.8398GB 1.0635GB passed
/var orac2 / 37.8398GB 1.0635GB passed
/etc orac2 / 37.8398GB 1.0635GB passed
/sbin orac2 / 37.8398GB 1.0635GB passed
/tmp orac2 / 37.8398GB 1.0635GB passed
Result: Free disk space check passed for "orac2:/usr,orac2:/var,orac2:/etc,orac2:/sbin,orac2:/tmp"
Check: Free disk space for "orac1:/usr,orac1:/var,orac1:/etc,orac1:/sbin,orac1:/tmp"
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/usr orac1 / 32.4382GB 1.0635GB passed
/var orac1 / 32.4382GB 1.0635GB passed
/etc orac1 / 32.4382GB 1.0635GB passed
/sbin orac1 / 32.4382GB 1.0635GB passed
/tmp orac1 / 32.4382GB 1.0635GB passed
Result: Free disk space check passed for "orac1:/usr,orac1:/var,orac1:/etc,orac1:/sbin,orac1:/tmp"
Check: User existence for "grid"
Node Name Status Comment
------------ ------------------------ ------------------------
orac2 passed exists(1100)
orac1 passed exists(1100)
Checking for multiple users with UID value 1100
Result: Check for multiple users with UID value 1100 passed
Result: User existence check passed for "grid"
Check: Group existence for "oinstall"
Node Name Status Comment
------------ ------------------------ ------------------------
orac2 passed exists
orac1 passed exists
Result: Group existence check passed for "oinstall"
Check: Group existence for "dba"
Node Name Status Comment
------------ ------------------------ ------------------------
orac2 passed exists
orac1 passed exists
Result: Group existence check passed for "dba"
Check: Membership of user "grid" in group "oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Status
---------------- ------------ ------------ ------------ ------------ ------------
orac2 yes yes yes yes passed
orac1 yes yes yes yes passed
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed
Check: Membership of user "grid" in group "dba"
Node Name User Exists Group Exists User in Group Status
---------------- ------------ ------------ ------------ ----------------
orac2 yes yes yes passed
orac1 yes yes yes passed
Result: Membership check for user "grid" in group "dba" passed
Check: Run level
Node Name run level Required Status
------------ ------------------------ ------------------------ ----------
orac2 5 3,5 passed
orac1 5 3,5 passed
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
orac2 hard 4096 65536 failed
orac1 hard 4096 65536 failed
Result: Hard limits check failed for "maximum open file descriptors"
Check: Soft limits for "maximum open file descriptors"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
orac2 soft 1024 1024 passed
orac1 soft 4096 1024 passed
Result: Soft limits check passed for "maximum open file descriptors"
Check: Hard limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
orac2 hard 46654 16384 passed
orac1 hard 46654 16384 passed
Result: Hard limits check passed for "maximum user processes"
Check: Soft limits for "maximum user processes"
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
orac2 soft 1024 2047 failed
orac1 soft 1024 2047 failed
Result: Soft limits check failed for "maximum user processes"
Check: System architecture
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 x86_64 x86_64 passed
orac1 x86_64 x86_64 passed
Result: System architecture check passed
Check: Kernel version
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 2.6.32-279.el6.x86_64 2.6.32 passed
orac1 2.6.32-279.el6.x86_64 2.6.32 passed
Result: Kernel version check passed
Check: Kernel parameter for "semmsl"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
orac1 250 250 250 passed
orac2 250 250 250 passed
Result: Kernel parameter check passed for "semmsl"
Check: Kernel parameter for "semmns"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
orac1 32000 32000 32000 passed
orac2 32000 32000 32000 passed
Result: Kernel parameter check passed for "semmns"
Check: Kernel parameter for "semopm"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
orac1 100 100 100 passed
orac2 100 100 100 passed
Result: Kernel parameter check passed for "semopm"
Check: Kernel parameter for "semmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
orac1 128 128 128 passed
orac2 128 128 128 passed
Result: Kernel parameter check passed for "semmni"
Check: Kernel parameter for "shmmax"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
orac1 4398046511104 4398046511104 3067658240 passed
orac2 4398046511104 4398046511104 3067658240 passed
Result: Kernel parameter check passed for "shmmax"
Check: Kernel parameter for "shmmni"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
orac1 4096 4096 4096 passed
orac2 4096 4096 4096 passed
Result: Kernel parameter check passed for "shmmni"
Check: Kernel parameter for "shmall"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
orac1 4294967296 4294967296 599152 passed
orac2 4294967296 4294967296 599152 passed
Result: Kernel parameter check passed for "shmall"
Check: Kernel parameter for "file-max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
orac1 6815744 6815744 6815744 passed
orac2 6815744 6815744 6815744 passed
Result: Kernel parameter check passed for "file-max"
Check: Kernel parameter for "ip_local_port_range"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
orac1 between 9000 & 65500 between 9000 & 65500 between 9000 & 65535 passed
orac2 between 9000 & 65500 between 9000 & 65500 between 9000 & 65535 passed
Result: Kernel parameter check passed for "ip_local_port_range"
Check: Kernel parameter for "rmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
orac1 262144 262144 262144 passed
orac2 262144 262144 262144 passed
Result: Kernel parameter check passed for "rmem_default"
Check: Kernel parameter for "rmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
orac1 4194304 4194304 4194304 passed
orac2 4194304 4194304 4194304 passed
Result: Kernel parameter check passed for "rmem_max"
Check: Kernel parameter for "wmem_default"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
orac1 262144 262144 262144 passed
orac2 262144 262144 262144 passed
Result: Kernel parameter check passed for "wmem_default"
Check: Kernel parameter for "wmem_max"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
orac1 1048576 1048576 1048576 passed
orac2 1048576 1048576 1048576 passed
Result: Kernel parameter check passed for "wmem_max"
Check: Kernel parameter for "aio-max-nr"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
orac1 1048576 1048576 1048576 passed
orac2 1048576 1048576 1048576 passed
Result: Kernel parameter check passed for "aio-max-nr"
Check: Package existence for "binutils"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 binutils-2.20.51.0.2-5.34.el6 binutils-2.20.51.0.2 passed
orac1 binutils-2.20.51.0.2-5.34.el6 binutils-2.20.51.0.2 passed
Result: Package existence check passed for "binutils"
Check: Package existence for "compat-libcap1"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 compat-libcap1-1.10-1 compat-libcap1-1.10 passed
orac1 compat-libcap1-1.10-1 compat-libcap1-1.10 passed
Result: Package existence check passed for "compat-libcap1"
Check: Package existence for "compat-libstdc++-33(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed
orac1 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"
Check: Package existence for "libgcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 libgcc(x86_64)-4.4.7-11.el6 libgcc(x86_64)-4.4.4 passed
orac1 libgcc(x86_64)-4.4.7-11.el6 libgcc(x86_64)-4.4.4 passed
Result: Package existence check passed for "libgcc(x86_64)"
Check: Package existence for "libstdc++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 libstdc++(x86_64)-4.4.7-11.el6 libstdc++(x86_64)-4.4.4 passed
orac1 libstdc++(x86_64)-4.4.7-11.el6 libstdc++(x86_64)-4.4.4 passed
Result: Package existence check passed for "libstdc++(x86_64)"
Check: Package existence for "libstdc++-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 libstdc++-devel(x86_64)-4.4.7-11.el6 libstdc++-devel(x86_64)-4.4.4 passed
orac1 libstdc++-devel(x86_64)-4.4.7-11.el6 libstdc++-devel(x86_64)-4.4.4 passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"
Check: Package existence for "sysstat"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 sysstat-9.0.4-20.el6 sysstat-9.0.4 passed
orac1 sysstat-9.0.4-20.el6 sysstat-9.0.4 passed
Result: Package existence check passed for "sysstat"
Check: Package existence for "gcc"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 gcc-4.4.7-11.el6 gcc-4.4.4 passed
orac1 gcc-4.4.7-11.el6 gcc-4.4.4 passed
Result: Package existence check passed for "gcc"
Check: Package existence for "gcc-c++"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 gcc-c++-4.4.7-11.el6 gcc-c++-4.4.4 passed
orac1 gcc-c++-4.4.7-11.el6 gcc-c++-4.4.4 passed
Result: Package existence check passed for "gcc-c++"
Check: Package existence for "ksh"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 ksh-20120801-21.el6.1 ksh... passed
orac1 ksh-20120801-21.el6.1 ksh... passed
Result: Package existence check passed for "ksh"
Check: Package existence for "make"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 make-3.81-20.el6 make-3.81 passed
orac1 make-3.81-20.el6 make-3.81 passed
Result: Package existence check passed for "make"
Check: Package existence for "glibc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 glibc(x86_64)-2.12-1.149.el6 glibc(x86_64)-2.12 passed
orac1 glibc(x86_64)-2.12-1.149.el6 glibc(x86_64)-2.12 passed
Result: Package existence check passed for "glibc(x86_64)"
Check: Package existence for "glibc-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 glibc-devel(x86_64)-2.12-1.149.el6 glibc-devel(x86_64)-2.12 passed
orac1 glibc-devel(x86_64)-2.12-1.149.el6 glibc-devel(x86_64)-2.12 passed
Result: Package existence check passed for "glibc-devel(x86_64)"
Check: Package existence for "libaio(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.107 passed
orac1 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.107 passed
Result: Package existence check passed for "libaio(x86_64)"
Check: Package existence for "libaio-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.107 passed
orac1 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.107 passed
Result: Package existence check passed for "libaio-devel(x86_64)"
Check: Package existence for "nfs-utils"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
orac2 nfs-utils-1.2.3-26.el6 nfs-utils-1.2.3-15 passed
orac1 nfs-utils-1.2.3-26.el6 nfs-utils-1.2.3-15 passed
Result: Package existence check passed for "nfs-utils"
Checking availability of ports "6200,6100" required for component "Oracle Notification Service (ONS)"
Node Name Port Number Protocol Available Status
---------------- ------------ ------------ ------------ ----------------
orac2 6200 TCP yes successful
orac1 6200 TCP yes successful
orac2 6100 TCP yes successful
orac1 6100 TCP yes successful
Result: Port availability check passed for ports "6200,6100"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Current group ID
Result: Current group ID check passed
Starting check for consistency of primary group of root user
Node Name Status
------------------------------------ ------------------------
orac2 passed
orac1 passed
Check for consistency of primary group of root user passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed
Checking daemon liveness...
Check: Liveness for "ntpd"
Node Name Running?
------------------------------------ ------------------------
orac2 yes
orac1 yes
Result: Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
Checking whether NTP daemon or service is using UDP port 123 on all nodes
Check for NTP daemon or service using UDP port 123
Node Name Port Open?
------------------------------------ ------------------------
orac2 yes
orac1 yes
NTP common Time Server Check started...
NTP Time Server "193.27.209.1" is common to all nodes on which the NTP daemon is running
NTP Time Server "194.29.130.252" is common to all nodes on which the NTP daemon is running
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Checking on nodes "[orac2, orac1]"...
Check: Clock time offset from NTP Time Server
Time Server: 193.27.209.1
Time Offset Limit: 1000.0 msecs
Node Name Time Offset Status
------------ ------------------------ ------------------------
orac2 -355.25 passed
orac1 -124.19 passed
Time Server "193.27.209.1" has time offsets that are within permissible limits for nodes "[orac2, orac1]".
Time Server: 194.29.130.252
Time Offset Limit: 1000.0 msecs
Node Name Time Offset Status
------------ ------------------------ ------------------------
orac2 -358.87 passed
orac1 -109.63 passed
Time Server "194.29.130.252" has time offsets that are within permissible limits for nodes "[orac2, orac1]".
Clock time offset check passed
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Checking Core file name pattern consistency...
Core file name pattern consistency check passed.
Checking to make sure user "grid" is not in "root" group
Node Name Status Comment
------------ ------------------------ ------------------------
orac2 passed does not exist
orac1 passed does not exist
Result: User "grid" is not part of "root" group. Check passed
Check default user file creation mask
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
orac2 0022 0022 passed
orac1 0022 0022 passed
Result: Default user file creation mask check passed
Checking integrity of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
WARNING:
PRVF-5640: Both search and domain entries are present in file "/etc/resolv.conf" on the following nodes: orac1,orac2
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
Checking file "/etc/resolv.conf" to make sure that only one domain entry is defined
More than one "domain" entry does not exist in any "/etc/resolv.conf" file
All nodes have same "domain" entry defined in file "/etc/resolv.conf"
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
More than one "search" entry does not exist in any "/etc/resolv.conf" file
All nodes have the same "search" order defined in file "/etc/resolv.conf"
Checking DNS response time for an unreachable node
Node Name Status
------------------------------------ ------------------------
orac1 passed
orac2 passed
The DNS response time for an unreachable node is within acceptable limit on all nodes
Check for integrity of file "/etc/resolv.conf" passed
Check: Zone consistency
Result: Zone consistency check passed
Checking integrity of name service switch configuration file "/etc/nsswitch.conf"...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking daemon "avahi-daemon" is not configured and running
Check: Daemon "avahi-daemon" not configured
Node Name Configured Status
------------ ------------------------ ------------------------
orac2 yes failed
orac1 yes failed
Daemon not configured check failed for process "avahi-daemon"
Check: Daemon "avahi-daemon" not running
Node Name Running? Status
------------ ------------------------ ------------------------
orac2 yes failed
orac1 yes failed
Daemon not running check failed for process "avahi-daemon"
Starting check for /dev/shm mounted as temporary file system...
Check for /dev/shm mounted as temporary file system passed
******************************************************************************************
Here is a list of pre-requisites fixable chose to fix at this session
******************************************************************************************
-------------- --------------- ----------------
Check has failed. Failed on nodes of reboot needed?
-------------- --------------- ----------------
Limit: maximum orac2 opened, orac1 not
file descriptors
The soft limit: maximum user orac2, orac1 not
process
Demon 'avahi-daemon' not orac2, orac1 not
configured and running
Run "/ tmp/CVU_12.1.0.1.0_grid/runfixup.sh" as root user on the nodes 'orac1, orac2' to perform correcting operations manually
Press the ENTER key to continue after execution of ' / tmp/CVU_12.1.0.1.0_grid/runfixup.sh ' has finished on the nodes 'orac1, orac2 '.
Fix: Hard Limit: maximum open file descriptors
Node Name                             Status
------------------------------------  ------------------------
orac2                                 failed
orac1                                 failed
ERROR:
GLWB-9023: Manual fix command "/tmp/CVU_12.1.0.1.0_grid/runfixup.sh" was not issued by the root user on node "orac2"
GLWB-9023: Manual fix command "/tmp/CVU_12.1.0.1.0_grid/runfixup.sh" was not issued by the root user on node "orac1"
Result: "Hard Limit: maximum open file descriptors" could not be fixed on nodes "orac2,orac1"
Fix: Soft Limit: maximum user processes
Node Name                             Status
------------------------------------  ------------------------
orac2                                 failed
orac1                                 failed
ERROR:
GLWB-9023: Manual fix command "/tmp/CVU_12.1.0.1.0_grid/runfixup.sh" was not issued by the root user on node "orac2"
GLWB-9023: Manual fix command "/tmp/CVU_12.1.0.1.0_grid/runfixup.sh" was not issued by the root user on node "orac1"
Result: "Soft Limit: maximum user processes" could not be fixed on nodes "orac2,orac1"
Fix: Daemon "avahi-daemon" not configured and running
Node Name                             Status
------------------------------------  ------------------------
orac2                                 failed
orac1                                 failed
ERROR:
GLWB-9023: Manual fix command "/tmp/CVU_12.1.0.1.0_grid/runfixup.sh" was not issued by the root user on node "orac2"
GLWB-9023: Manual fix command "/tmp/CVU_12.1.0.1.0_grid/runfixup.sh" was not issued by the root user on node "orac1"
Result: "Daemon "avahi-daemon" not configured and running" could not be fixed on nodes "orac2,orac1"
Fix-up operations for some fixable prerequisites failed on nodes "orac2,orac1"
Pre-check for cluster services setup failed on all nodes.
SOLVED!
The root cause was that the iptables service was running.
I had simply forgotten to stop it and chkconfig it off!
The issue can be closed.
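For reference, the fix described above amounts to two commands on EL5/EL6-style init systems. A hedged sketch (on systemd-based hosts the equivalent would be `systemctl disable --now firewalld`):

```shell
# Stop the firewall now and keep it from starting at boot (EL5/EL6 style).
# Prints a hint instead on systems where chkconfig/service are unavailable.
if command -v chkconfig >/dev/null 2>&1; then
    service iptables stop      # stop the running firewall service
    chkconfig iptables off     # remove it from the boot runlevels
else
    echo "chkconfig not found; on systemd hosts use: systemctl disable --now firewalld"
fi
```

Run this as root on every cluster node before re-running the CVU pre-checks.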
-
After a fresh installation of:
Adobe Premiere Elements 11 Several programs will not start: configuration error 213:11
Please advise.
Cheers!
Please check: Configuration error | CC, CS
-
VMware 5.1 with virtual compatibility mode RDMs and a Microsoft SQL cluster
Hello,
I am a bit confused by the VMware documentation and hope someone can point me in the right direction.
I want to know if it is possible and supported to create a 2-node SQL 2008 R2 cluster (the Server 2008 R2 SP2 nodes are VMs) on a 2-node VMware 5.1 cluster using virtual compatibility mode RDMs.
When reading the vSphere 5.1 PDF at the link below, on page 9 there is a note stating: "NOTE Clusters on multiple physical machines with non-pass-through RDM is supported only for clustering with Windows Server 2003. It is not supported for clustering with Windows Server 2008."
So does that mean that what I want, a 2-node SQL Server 2008 R2 cluster, is not supported?
But I also found the link below, and in its table, in the RDM column of the SQL Cluster row, it says "yes" with a note 2.
VMware KB: Microsoft Clustering on VMware vSphere: Guidelines for supported configurations
Note 2 redirects to -> For more information on shared disk configurations, refer to the Disk Configurations section in this article.
- RDM: Configurations using shared storage for Quorum or data must be on Fibre Channel (FC)-based RDMs (physical mode for Cluster Across Boxes "CAB", virtual mode for Cluster in a Box "CIB") in vSphere 5.1 and earlier versions. RDMs on non-FC storage (iSCSI and FCoE) are supported only in vSphere 5.5. However, in earlier versions, FCoE is supported in very specific configurations. For more information, see note 4 above the Microsoft clustering solutions table.
Following note 4 ->
- In vSphere 5.5, native FCoE is supported. In vSphere 5.1 Update 1 and 5.0 Update 3, a two-node cluster configuration with Cisco UCS (VIC-1240/1280) cards and driver version 1.5.0.8 is supported with the Windows 2008 R2 SP1 64-bit guest operating system. For more information, see the VMware Hardware Compatibility Guide:
So this means that it is supported? I'm confused.
Hi Bypy,
A two-node Windows 2008 R2 SQL virtual cluster with RDMs is supported only for CIB, or Cluster in a Box, that is, when both virtual machines reside on the same ESXi host. The same configuration is not supported for CAB, or Cluster Across Boxes (virtual machines running on different ESXi hosts).
For CAB, you should go for physical mode RDMs.
According to this link, http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1037959
an SQL cluster using Windows 2008 R2 is supported with both physical and virtual mode RDMs. The choice between physical and virtual mode depends on whether you want CAB or CIB, respectively.
I hope this helps.
Cheers,
Arun
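One practical CAB prerequisite is that every ESXi host sees the shared LUN that will back the physical mode RDM. A sketch of how to confirm this per host (`naa.xxxx` is a hypothetical placeholder for your LUN's identifier, and `esxcli` only exists on an ESXi host):

```shell
# Confirm the RDM's backing LUN is visible on this ESXi host.
# For a CAB cluster, repeat on every host; all of them must list the device.
NAA="naa.xxxx"   # placeholder: replace with the shared LUN's NAA identifier
if command -v esxcli >/dev/null 2>&1; then
    esxcli storage core device list -d "$NAA"
else
    echo "run on each ESXi host: esxcli storage core device list -d $NAA"
fi
```

If any host cannot list the device, fix the zoning/masking on the array side before building the cluster.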
-
RDM & vSphere Replication
Hi all
I have to support a site-to-site migration project, and I need to move virtual machines configured with RDMs in both physical and virtual mode.
My idea was to use vSphere Replication, and I was wondering if anyone has an idea of how to do it, if it is possible at all.
All tips are welcome,
Daniele
Hi everyone,
I just wanted to add the link to the official statement in the vSphere Replication 5.1 / Site Recovery Manager documentation: http://pubs.vmware.com/srm-51/topic/com.vmware.srm.admin.doc/GUID-084C089D-9689-4F34-9A75-8AFB980A725E.html
"vSphere Replication supports RDM devices in virtual mode only, for both the source and the target device."
Best regards
Bjoern
-
RDM missing after reboot.
Hi all
Due to a hardware migration, we had to shut down (in maintenance mode) all our ESXi 5.1 hosts and their VMs (configured with RDMs). After the hardware migration, we powered all the ESXi hosts back on.
This is where the problem starts: when we try to power on the VM (Linux OS) that is mapped with an RDM, it fails to launch with an error that the mapped storage LUN is missing, and we cannot run the virtual machine. To work around this, we followed a few steps:
We removed the RDM (Hard disk 2) from the VM and powered the virtual machine on successfully. But after that, when we try to map the RDM manually in vCenter, the RDM option for the virtual machine appears to be greyed out.
The logical unit number that was mapped is visible to the ESXi host, but we are not able to map it. We have rescanned, detached and re-attached the LUN, but the RDM option is still greyed out, and that is why we cannot create the RDM for this virtual machine.
Can someone please guide us to get this problem resolved...!
Thanks in advance...!
Kind regards
Subash.
When your storage came back up, did you see the drives?
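If the greyed-out wizard cannot be coaxed back, one known workaround is to recreate the RDM mapping file from the ESXi shell and then attach it as an existing disk. A hedged sketch (the device ID, datastore and VM paths are placeholders, and `vmkfstools` exists only on an ESXi host):

```shell
# Recreate a physical-mode (pass-through) RDM pointer for an existing LUN.
# -z = physical compatibility mode; use -r instead for virtual mode.
RDM_LUN="/vmfs/devices/disks/naa.xxxx"                  # placeholder LUN ID
RDM_PTR="/vmfs/volumes/datastore1/myvm/myvm_rdm.vmdk"   # placeholder path
if command -v vmkfstools >/dev/null 2>&1; then
    vmkfstools -z "$RDM_LUN" "$RDM_PTR"
else
    echo "run on the ESXi host: vmkfstools -z $RDM_LUN $RDM_PTR"
fi
```

The generated pointer .vmdk can then be added to the VM in vCenter as an existing hard disk.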
-
VMDirectpath configuration problems.
Well, I think I have passthrough (VMDirectPath) configuration problems. I am using VMware vSphere ESX 4.0.0.
I tried to configure passthrough, so I checked Configuration > Hardware > Advanced Settings, and found "No devices currently enabled for passthrough".
I clicked "Configure Passthrough..." and ticked "Mark devices for passthrough"; I found that all the USB devices are passthrough capable but not running in passthrough mode, and that the PCI/PCI bridges are not capable of passthrough.
After ignoring the warning message ("Assigning 'vmnic0', which is used by virtual switch 'vSwitch0', for passthrough access can make the host inaccessible and may require significant effort to undo."), I marked it as a passthrough device, and of course I experienced the significant effort to undo it. :$
I could not access the server, and the host was unavailable for some time.
What I really want to know is:
how to find the correct devices for passthrough,
and what the passthrough configuration problem was.
Is this a client vs. server problem, or a hardware one (such as the kernel version or CPU specifications)?
If anyone knows the solution or has any advice, please help me.
Thank you.
Kwangwon Chae
Hi Kwangwon, it seems like you have just one physical network adapter in the ESX(i) host, and it is used by vSwitch0 (at installation time the vswif0 interface, commonly called the service console or management port, is created there). That is why you get this warning when you try to assign this PCI device to VMDirectPath, and also why you lose the connection to your host: the adapter can no longer be used by your ESX server.
Kind regards.
If you find this information useful, please award "correct"/"helpful" points.
My virtualization blog in Spanish
-
Shared Services registration failing when configuring EPMA 11.1.1.3
Hi all, I currently work on Essbase and have no knowledge of Planning, so I am trying to update my knowledge. I installed EPMA 11.1.1.3 on a Windows 2003 VMware workstation that runs on my Windows 7 laptop. With an Oracle 10g installation I created a user/database (epma) and schemas (sm, ms), and I am trying to configure Foundation, Essbase, Planning, and Reporting and Analysis services. The database configuration and registration with Shared Services has failed. Am I doing something wrong? I followed some configuration documents that include screenshots, but I am still unsure about what I am doing. Please advise on how to avoid these failures.
Here are the results of the configuration validation.
Thanks in advance.
Srinivas
Oracle EPM system
Diagnostic reports
Generated 03/03/12 23:05
Validation running on srinivas
Validation tool Info: 11.1.1.3.0 drop 9-0 build 3129 2009-07-16 13:24
Operating system name: Windows 2003
Operating system version: 5.2
Status  Service  Test description  Test duration
Hyperion Foundation
FAILED CFG: Configuration Check if all configuration tasks have been completed
Error: The following tasks are not configured:
relationalStorageConfiguration: Configuration failed
Recommended action: Try to configure the mentioned tasks. 0s
FAILED OPN: OpenLDAP Check if the number of global roles is not less than 16
Error: $item.exception.localizedMessage
Recommended action:  0s
FAILED OPN: OpenLDAP OpenLDAP connection
Error: srinivas:28089
Recommended action: Start OpenLDAP. 1s
ERP Integrator
PASSED CFG: Configuration WARNING: Some of the suggested configuration tasks are not yet completed.
Check if all configuration tasks have been completed 0s
Essbase / Client
PASSED WR: Windows Registry Check if the EQD Excel add-in is registered for the Essbase Client 0s
Essbase / Essbase
FAILED CFG: Configuration Check if all configuration tasks have been completed
Error: The following tasks are not configured:
hubRegistration: Configuration failed
Recommended action: Try to configure the mentioned tasks. 0s
FAILED EAS: Essbase Server Check that the Essbase server is started, using the MaxL command
Error: Result: Unable to connect to the Essbase server using the MaxL command. Please check that the Essbase server is running.
Recommended action: Check that the Essbase server is started. 2s
FAILED SVR: Essbase Java API External checker launched with the following command: C:\Hyperion\common\validation\9.5.0.0\launchEssbaseJavaAPI.bat EssbaseJAPIConnect admin * http://srinivas:13080/aps/JAPI srinivas
Error: Result: -1; Error message: Unable to connect to the OLAP service. Unable to connect to the Essbase server at srinivas. Timed out connecting to the Essbase Agent/Server using the TCP/IP protocol. Check your network connections.
Recommended action: Make sure that the external checker is working. 7s
Essbase / Essbase Administration Services
FAILED CFG: Configuration Check if all configuration tasks have been completed
Error: The following tasks are not configured:
hubRegistration: Configuration failed
Some of the suggested configuration tasks are not yet completed.
Recommended action: Try to configure the mentioned tasks. 0s
Essbase / Provider Services
FAILED CFG: Configuration Check if all configuration tasks have been completed
Error: The following tasks are not configured:
hubRegistration: Configuration failed
Recommended action: Try to configure the mentioned tasks. 0s
Essbase / Smart Search
PASSED CFG: Configuration Check if all configuration tasks have been completed 0s
Essbase / Studio
FAILED CFG: Configuration Check if all configuration tasks have been completed
Error: The following tasks are not configured:
hubRegistration: Configuration failed
Some of the suggested configuration tasks are not yet completed.
Recommended action: Try to configure the mentioned tasks. 0s
Financial Data Quality Management
PASSED CFG: Configuration WARNING: Some of the suggested configuration tasks are not yet completed.
Check if all configuration tasks have been completed 0s
Foundation / Calculation Manager
FAILED CFG: Configuration Check if all configuration tasks have been completed
Error: The following tasks are not configured:
hubRegistration: Configuration failed
Some of the suggested configuration tasks are not yet completed.
Recommended action: Try to configure the mentioned tasks. 0s
Foundation / Performance Management Architect
FAILED CFG: Configuration Check if all configuration tasks have been completed
Error: The following tasks are not configured:
hubRegistration: Configuration failed
Some of the suggested configuration tasks are not yet completed.
Recommended action: Try to configure the mentioned tasks. 0s
Foundation / Workspace
FAILED CFG: Configuration Check if all configuration tasks have been completed
Error: The following tasks are not configured:
hubRegistration: Configuration failed
Some of the suggested configuration tasks are not yet completed.
Recommended action: Try to configure the mentioned tasks. 0s
PASSED HTTP: Http Check that file [C:\Hyperion\common\httpServers\Apache\2.0.59\bin\installhyperionapacheservice.err] is empty 0s
PASSED HTTP: Http Check for string [LoadModule jk_module modules/mod_jk] in file [C:\Hyperion\common\httpServers\Apache\2.0.59\conf\httpd.conf] 0s
PASSED REG: Registry All links are present in the registry. 2s
Performance Scorecard
PASSED CFG: Configuration WARNING: Some of the suggested configuration tasks are not yet completed.
Check if all configuration tasks have been completed 0s
Planning
FAILED CFG: Configuration Check if all configuration tasks have been completed
Error: The following tasks are not configured:
hubRegistration: Configuration failed
applicationServerDeployment: Configuration failed
Some of the suggested configuration tasks are not yet completed.
Recommended action: Try to configure the mentioned tasks. 0s
Profitability and Cost Management
PASSED CFG: Configuration WARNING: Some of the suggested configuration tasks are not yet completed.
Check if all configuration tasks have been completed 0s
Strategic Finance
PASSED CFG: Configuration WARNING: Some of the suggested configuration tasks are not yet completed.
Check if all configuration tasks have been completed 0s
Financial Management
PASSED CFG: Configuration WARNING: Some of the suggested configuration tasks are not yet completed.
Check if all configuration tasks have been completed 0s
Reporting and Analysis
PASSED CFG: Configuration Check if all configuration tasks have been completed 0s
FAILED SVR: Essbase Java API External checker launched with the following command: C:\Hyperion\common\validation\9.5.0.0\launchEssbaseJavaAPI.bat EssbaseJAPIConnect admin * http://srinivas:13080/aps/JAPI srinivas
Error: Result: -1; Error message: Unable to connect to the OLAP service. Unable to connect to the Essbase server at srinivas. Timed out connecting to the Essbase Agent/Server using the TCP/IP protocol. Check your network connections.
Recommended action: Make sure that the external checker is working. 1s
Test start time: 23:05:18
Test end time: 23:05:40
Test duration: 21s
Hi Srinivas,
I suggest you configure the different components separately. First configure Shared Services, then continue with the other components.
Also, using the same relational database for all components is not recommended.
Configuring separately will make it clearer where things go wrong, and it will be easier to fix too.
I am also sceptical about your database connection. A few things to check before you start the configuration:
1. Check that you can connect to the database and that there are no firewall problems.
2. Also check the timeout values defined for the database.
Kind regards,
Pradeep
-
I need to clone several VMs to a recovery site, and most are configured with RDM hard disks. How can I do this live, or can I at all? ESX 3.5
If you use vCenter/VIC to clone the virtual machine, I would remove the RDM from the VM before cloning, otherwise it gets converted into a VMDK.
Your cloned VM will then be missing this RDM disk, and you must assign it a new empty RDM. Windows will probably complain either way, but after a reconfiguration it should be OK.
If you copy to the datastore at the file level, you must edit the virtual machine's settings at the recovery site and remove/replace the RDM disk.
Arnim van Lieshout
-