cron VMware

Sorry if this sounds like a very newbie question, but:

What are the commands I need to automate guest stops, startups and restarts under CentOS 5.3? (In case it helps, the output of cat /proc/version: Linux version 2.6.18-128.1.1.el5 ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-44)) #1 SMP Wed Mar 25 18:14:28 EDT 2009)

Specifically, I inherited a pair of Windows Server 2003 guests which must communicate with each other and which, annoyingly, have a bad habit of forgetting how to do so on a regular basis; the only established workaround is regular restarts. Unfortunately, the software that causes this is legacy custom code and cannot be fixed at this stage (it has in fact largely been supplanted by a new web-based equivalent). At this point I'd rather retire the software and its virtual machines entirely, but they are still needed for retrieving archived data while we move over to the new software mentioned above.

So, I want to set up a scheduled task (cron job?) to stop and power off the first guest, restart the second, and then, once that reboot has finished, power the first guest back on.

Any help gratefully received.

 vmrun -T server -h https://localhost:8333/sdk  -u root -p  list
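To put that together into something cron can call, here is a rough sketch of the stop / restart / start sequence with vmrun; the host URL, password, .vmx paths and the sleep are placeholders, not taken from this thread, so adjust them to your setup. The default DRYRUN=1 only prints the commands; set DRYRUN=0 on the real host.

```shell
#!/bin/sh
# Sketch of a cron-driven restart sequence using vmrun (the VMware
# Server VIX command-line tool). All names below are placeholders.
DRYRUN=${DRYRUN:-1}
VMRUN="vmrun -T server -h https://localhost:8333/sdk -u root -p secret"
GUEST1='[datastore1] guest1/guest1.vmx'
GUEST2='[datastore1] guest2/guest2.vmx'

# Print the vmrun subcommand in dry-run mode, otherwise execute it.
run() {
    if [ "$DRYRUN" = 1 ]; then echo "$*"; else $VMRUN "$@"; fi
}

restart_sequence() {
    run stop "$GUEST1" soft          # 1. shut down the first guest
    run reset "$GUEST2" soft         # 2. restart the second guest
    [ "$DRYRUN" = 1 ] || sleep 300   # 3. give the reboot time to finish
    run start "$GUEST1"              # 4. power the first guest back on
}

restart_sequence
```

Saved as a script and marked executable, this can then be pointed at from a crontab entry.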

J

If you have found this or any other answer useful, please consider using the Helpful or Correct buttons to award points.

Tags: VMware

Similar Questions

  • Executing a shell command using the VMware CLI

    Is it possible to execute a shell command using the VMware CLI? I want to:

    1. upload a script to the ESXi server - vifs put

    2. run it so that it produces results - ?

    3. download the results - vifs get

    As shown above, it is not clear how to do part 2. My guess is that, for security reasons, it will not be possible. But I want to make sure I'm not missing something.

    Let me know if you know a way to do this using the CLI, or any other alternatives.

    To achieve #2 I can think of a few things, though most are quite hack-ish:

    1. you can use plink.exe or another scriptable ssh client to connect to the host and run the script.

    2. use vifs to download the /var/spool/cron/crontabs/root crontab file, change it to run the script, and upload it again

    3. there is an unsupported VIB that adds arbitrary shell execution to esxcli:

    http://www.v-front.de/2013/01/release-esxcli-plugin-to-run-arbitrary.html

  • VMware ESXi 5 host stops sending syslogs to the remote server (Splunk)

    We recently installed a Splunk syslog server and pointed our devices at it. I noticed that when we stop/start the server (or the service), the logs from all my ESXi 5 hosts stop coming in.

    There seems to be a known problem:

    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2003127

    Step 5 of that KB restarts the log daemon, and that works. But there is no way I want to log on to the console and run it every time someone does something in Splunk that needs a restart, or whenever the Windows box is rebooted for patches.

    I started writing a bash script (below), but discovered that ESXi really does not have a cron area the way ESX 4 (non-i) does. If I brute-force create it on the host, it will get wiped out along with its tasks.

    # This checks whether syslog has stopped writing to the "SPLUNK" third-party syslog server

    # It will restart the syslog service if it sees that it has stopped

    const="unable to write the log"

    if [ -e /var/log/.vmsyslogd.err ]; then

        tail -n 1 /var/log/.vmsyslogd.err | grep "unable to write the log"

        if [ $? = 0 ]; then

            echo "$const found in the LAST line; restarting the syslog server."

        fi

    fi

    I was going to cron it to run every 15 minutes and, if it saw "unable to write the log" in the last line, replace the echo line with an "esxcli system syslog reload".

    I have vCenter on a Windows machine and would like to run a scheduled task against all my hosts (perhaps from a csv file) and then issue "esxcli system syslog reload" where that line is found. I can't figure out how to do this; can anyone help me out?

    I'd like to use what I have; I don't have a vMA, and no Splunk VMs either.


    William Lam posted a script for how to do this on his blog. It uses a vCenter alarm to alert when connectivity to the loghost is lost.

    virtuallyGhetto: Detecting ESXi remote syslog connection errors using a vCenter alarm
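If you would rather take the brute-force route than the alarm approach (assuming ssh access is enabled on the hosts, which this thread does not state), a loop like the following could issue the reload against each host. Hostnames are placeholders, and the default DRYRUN=1 only prints the commands:

```shell
#!/bin/sh
# Sketch: run "esxcli system syslog reload" on a list of ESXi hosts
# over ssh. This is NOT the poster's eventual solution (they used a
# vCenter alarm); it is a manual alternative, assuming ssh key access.
DRYRUN=${DRYRUN:-1}

reload_syslog_on() {
    for host in "$@"; do
        if [ "$DRYRUN" = 1 ]; then
            echo "ssh root@$host esxcli system syslog reload"
        else
            ssh "root@$host" esxcli system syslog reload
        fi
    done
}

# Placeholder hostnames; replace with your own (or read from a file).
reload_syslog_on esxi01 esxi02 esxi03
```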

  • Problem running a script from cron

    I have a script I want cron to run every day, and it works fine if I log in to the vMA and run it from the command line, but from cron it just returns "Server version unavailable at ... /usr/lib/perl5/5.10.0/VMware/VICommon.pm line 545." And the thing is, it worked from cron too, before we updated the vMA to 5.0.0.1 build 643553.

    Any ideas how to solve this problem?

    Try adding

    $ENV{PERL_LWP_SSL_VERIFY_HOSTNAME} = 0;

    (old topic with this error: Server version unavailable at "https://1.1.1.1/sdk/vimService.wsdl" when connecting to Virtual Center with connect.pl)
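Since LWP also reads PERL_LWP_SSL_VERIFY_HOSTNAME from the environment, an alternative sketch is to export it from a small cron wrapper instead of editing the Perl script itself. The wrapper below is illustrative, not from the thread; the wrapped command is passed as arguments:

```shell
#!/bin/sh
# Cron wrapper sketch: export the variable so LWP (used by the vSphere
# SDK for Perl) skips hostname verification, then run the real script,
# e.g.:  run_with_ssl_override /home/vi-admin/myscript.pl
run_with_ssl_override() {
    PERL_LWP_SSL_VERIFY_HOSTNAME=0
    export PERL_LWP_SSL_VERIFY_HOSTNAME
    "$@"
}

run_with_ssl_override sh -c 'echo "verify=$PERL_LWP_SSL_VERIFY_HOSTNAME"'
```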

  • Deleting files from /usr/lib/vmware/hostd/docroot/downloads with PowerCLI

    I'm trying to achieve the same goal as the cron example in this KB, with PowerCLI or C#:
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1026359

    We have tools that collect our ESX/ESXi log bundles and ship them to VMware support for analysis, but the files never get removed from the */downloads directory on the host.

    We have a mixed ESX 3.5/4.0 and ESX/ESXi 4.1 environment. I'd prefer not to run local cron jobs on the hosts. Does anyone have a good way to handle this with PowerCLI or C# code against vCenter?

    Thank you!

    Jeff

    On the ESX servers, where you have a COS, you can use plink.exe to run the command remotely.

    There is an example in the "HBA information: PowerCLI" thread.

    On an ESXi server you can use the same procedure, but it requires a bit more work.

    And the environment you are connecting to is somewhat more limited compared to the COS.

    See the example of changing password policies on ESXi.

  • ESXi 5 with ghetto and cron

    Hello

    I set up my cron script to back up with ghettoVCB, but I see an error in the system log.

    Cron configuration is:

    40 17 * * * /vmfs/volumes/4ee62529-1ca448a4-cbfb-0024e8624a07/lamw-ghettoVCB-518cef7/ghettoVCB.sh -f /vmfs/volumes/datastoreBkup/lamw-ghettoVCB-518cef7/vms_to_backup > /vmfs/volumes/datastoreBkup/ghetto-bk-$(date +%Y-%m-%d).log

    The error that appears in the syslog is:

    ~ # tail -f /var/log/syslog.log

    2012-01-06T10:38:01Z crond[3858594]: don't have username root, parsing 40 17 * * * /vmfs/volumes/4ee62529-1ca448a4-cbfb-0024e8624a07/lamw-ghettoVCB-518cef7/ghettoVCB.sh -f /vmfs/volumes/datastoreBkup/lamw-ghettoVCB-518cef7/vms_to_backup > /vmfs/volumes/datastoreBkup/ghetto-bk-$(date +%Y-%m-%d).log

    I have tried other binaries in cron; for example, I scheduled a job that creates a file with touch. That command runs correctly from cron, but I can't do the same thing with ghettoVCB. Any ideas?

    Thank you.

    Hello

    If you take a look at the documentation - http://communities.vmware.com/docs/DOC-8760 -

    you will notice that for the $(date ...) command you must escape the '%' characters in the cron entry. Just give it a try and see if it works; another way to confirm is to simply remove the $(date) part for the time being, to make sure the actual cron entry can be set up properly.
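For illustration (this is an assumption about the fix, so verify it against the documentation): the poster's entry with the % characters in $(date ...) escaped, since an unescaped % is special in crontab command fields:

```shell
# crontab entry (ESXi /var/spool/cron/crontabs/root), % escaped as \%
40 17 * * * /vmfs/volumes/4ee62529-1ca448a4-cbfb-0024e8624a07/lamw-ghettoVCB-518cef7/ghettoVCB.sh -f /vmfs/volumes/datastoreBkup/lamw-ghettoVCB-518cef7/vms_to_backup > /vmfs/volumes/datastoreBkup/ghetto-bk-$(date +\%Y-\%m-\%d).log
```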

  • cron on ESXi5 "operation not permitted".

    Hi all

    I have my script configured and working OK on my ESXi5 installation, but I hit a problem when I try to add the task as a weekly cron job. I am editing '/var/spool/cron/crontabs/root' with vi. I add the necessary line, but when I try to save I get an error message saying "operation not permitted". I checked the file permissions; they are currently set to 644, with the 'root' file owned by user root and group root. Am I missing something?

    Note down the information in the file.

    Delete the file.

    Create the file again and put the old data back by copying it in; now you will be able to write your changes to it...
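A safe way to rehearse those steps is with a stand-in file instead of the real /var/spool/cron/crontabs/root. The demo below (filenames invented) just walks the note / delete / recreate / edit sequence:

```shell
#!/bin/sh
# Demonstration of the delete-and-recreate workaround, using a local
# stand-in file so it can be tried safely anywhere.
demo_workaround() {
    crontab=./root.crontab.demo
    echo '40 17 * * * /bin/true' > "$crontab"  # stand-in for the existing file
    cp "$crontab" "$crontab.bak"   # 1. note (save) the information in the file
    rm "$crontab"                  # 2. delete the file
    cp "$crontab.bak" "$crontab"   # 3. recreate it, copying the old data back
    echo '0 3 * * 0 /bin/true' >> "$crontab"   # 4. the new line now saves fine
    cat "$crontab"                 # show both entries
    rm -f "$crontab" "$crontab.bak"
}

demo_workaround
```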

    created by JohnsonChris - view the complete discussion at http://communities.vmware.com/message/1846979#1846979

  • Cron job

    Hi guys,

    I have my whole environment up and running; I can manually create backups with ghettoVCBg2.pl and it works perfectly.

    The only thing I want to do now, is to do a cron job.

    I tested the cron function by adding a simple task to /etc/crontab:

    */1 * * * * root wall test

    It works very well; every minute I get the message 'test' on my console.

    I can also add this to /etc/crontab:

    */1 * * * * root /home/vi-admin/test.sh

    I made the file test.sh; this is what is in it:

    wall test

    Again, I get the message 'test' on my console.

    Now the problem...

    When I edit /etc/crontab with this:

    */1 * * * * root /home/vi-admin/ghettoVCBg2.pl --vmlist nag001 --dryrun 1

    Nothing runs. I also tried putting the command ./ghettoVCBg2.pl --vmlist nag001 --dryrun 1 into a file called test.sh... after that I edited /etc/crontab with:

    */1 * * * * root /home/vi-admin/test.sh

    Once again, nothing happens.

    When I look with "sudo nano /etc/crontab" I see:

    crond[12611]: module: pam_lsass (root) CMD (/home/vi-admin/test.sh)

    crond[12611]: module: pam_lsass pam_sm_acct_mgmt doesn't have a login: root error code: 2

    crond[12611]: module: pam_lsass pam_sm_close_session error: error code: 2

    BTW: I also tried modifying /etc/crontab with:

    */1 * * * * vi-admin /home/vi-admin/test.sh

    and

    */1 * * * * vi-admin /home/vi-admin/ghettoVCBg2.sh --vmlist nag001 --dryrun 1

    I'm out of ideas here...

    TNX in advance

    Jim

    Apparently I was able to reproduce the problem and solve it.

    Change backup.sh (the script called from cron) in the following way:

    #!/bin/bash
    #
    LD_LIBRARY_PATH=/opt/vmware/vma/lib64:/opt/vmware/vma/lib
    export LD_LIBRARY_PATH
    PATH=/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/java/jre-vmware/bin:/sbin:/usr/sbin:/home/vi-admin/bin
    export PATH
    cd /home/vi-admin/
    /home/vi-admin/ghettoVCBg2.pl --vmlist list --dryrun 1
    

    If running "sudo backup.sh" completes without error, you can call backup.sh from your crontab.

  • Need help with my cron-fu...

    Howdy.

    So here's my trouble.

    I've recently updated my environment to vMA 4.1. I have several inherited scripts that I'm working on converting to PowerCLI when I have time, but for now I need to get one of them running on the vMA. Part of the script calls vmcontrol.pl for its operation.

    When I run the script as a user (vi-admin) or as root, it works fine. When I put the scripts in a cron job, they fail. Logging the job shows that in the cron environment the script fails to compile the Perl files and complains about not finding a shared object file. My first thought was "ah, permissions", but that does not hold true.

    I know it's something obvious and easy, but I'm just not connecting the dots. I tried forcing the path variables in the cron job, I checked it with several users, and I've done this exact same process on vMA 4.0 many times before and it worked then. I know it's an environment thing, but I just don't have enough fu to figure it out.

    I am therefore asking for help. Anyone have any ideas? All input (and I'm working on RTFM'ing, but I thought I would work this thread at the same time) is appreciated.

    Thank you!

    -Abe



    INTEGRITAS!

    Abe Lister

    Just a guy who likes to virtualize

    ==============================

    Won't lie. I like points. I mean, if something was useful to you, consider slipping a few points its way!

    You are right, it's probably that the PATH is not set properly. I would recommend running "env" while logged in as root or vi-admin, grabbing the PATH variable, and adding it to /etc/crontab if you are trying to run as vi-admin. Once you have it running you can work out which paths you actually need, but given that you are using the vSphere SDK for Perl scripts, you want to make sure you capture all the paths the script references.
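A quick way to produce the line to paste into /etc/crontab from the interactive environment:

```shell
#!/bin/sh
# Print the current interactive PATH in crontab-ready form, so cron
# jobs see the same search path as a login shell.
crontab_path_line() {
    echo "PATH=$PATH"
}

crontab_path_line    # paste the printed line at the top of /etc/crontab
```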

    =========================================================================

    William Lam

    VMware vExpert 2009,2010

    VMware VCP3, 4

    VMware VCAP4-DCA

    VMware scripts and resources at: http://www.virtuallyghetto.com/

    Twitter: @lamw

    vGhetto script repository

    Introduction to the vMA (tips/tricks)

    Getting started with vSphere SDK for Perl

    VMware Code Central - Scripts/code samples for developers and administrators

    VMware developer community

    If you find this information useful, please give points to "correct" or "useful".

  • ghettoVCB script & ESXi cron

    After you have added:

    30 0 * * * /vmfs/volumes/4c52da67-d30f6128-5beb-000a5e3c7c28/scripts/backup/ghettoVCB.sh -f /vmfs/volumes/4c52da67-d30f6128-5beb-000a5e3c7c28/scripts/backup/list_test -g /vmfs/volumes/4c52da67-d30f6128-5beb-000a5e3c7c28/scripts/backup/ghettoVCB.conf > /vmfs/volumes/4c52da67-d30f6128-5beb-000a5e3c7c28/scripts/backup/ghettoVCB-backup-$(date +\%s).log

    TO:

    /var/spool/cron/crontabs/root

    and then saving the file, when I re-open the file the job is gone.

    Help, please! I've already been struggling for hours :(

    Please take a look at the cron FAQ, as it describes the exact steps, including how to persist changes: ghettoVCB.sh - a free alternative for virtual machine backup for ESX(i) 3.5, 4.x & 5.x
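For reference, the usual pattern from the ghettoVCB documentation (reproduced here from memory, so treat it as an assumption and verify it against the FAQ) is to re-create the entry at boot from /etc/rc.local, since the root crontab on ESXi does not survive a reboot; the paths below are the poster's own:

```shell
# Lines for /etc/rc.local on the ESXi host: stop crond, append the
# backup entry to the root crontab, restart crond.
/bin/kill $(cat /var/run/crond.pid)
/bin/echo '30 0 * * * /vmfs/volumes/4c52da67-d30f6128-5beb-000a5e3c7c28/scripts/backup/ghettoVCB.sh -f /vmfs/volumes/4c52da67-d30f6128-5beb-000a5e3c7c28/scripts/backup/list_test -g /vmfs/volumes/4c52da67-d30f6128-5beb-000a5e3c7c28/scripts/backup/ghettoVCB.conf > /vmfs/volumes/4c52da67-d30f6128-5beb-000a5e3c7c28/scripts/backup/ghettoVCB-backup-$(date +\%s).log' >> /var/spool/cron/crontabs/root
/bin/busybox crond
```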

    =========================================================================

    William Lam

    VMware vExpert 2009,2010

  • Best practices for automating ghettoVCBg2 from cron

    Hello world!

    I set up a vMA instance for scheduling backups with ghettoVCBg2 to a SAN datastore. Everything works like a charm from the command line; I use vi-fastpass for authentication, and backups complete just fine.

    However, I would like to invoke the script from cron, and I got stuck. Since vifp is designed to be run only from the command line and, as I read, is not supposed to work from a script, it seems the only possibility would be to create a dedicated backup user with administrator privileges and store the username and password in the shell script. I'm not happy to do so. I searched through the forums but couldn't find any simple solution.

    Any ideas for best practices?

    Thank you

    eliott100

    Actually, that's incorrect. The script relies on the fact that the ESX or ESXi hosts are managed by vi-fastpass, but when you run the script it does not use the vifpinit command to connect. It accesses credentials via the vi-fastpass library modules rather than vifpinit which, as you noticed, you cannot run non-interactively. Therefore it can basically be scheduled via cron; you do not have to run the script interactively, just set it up in your crontab. Please take a look at the documentation for more information.

    =========================================================================

    William Lam

    VMware vExpert 2009

  • How to install the cron tool

    Hi all

    I have a vSphere 4.0i (ESXi) host (build 171294).

    I need to install a cron-like tool.

    I want to know how to install it. Can I install it in the form of an RPM?

    Thank you and best regards,

    Vivek

    As mentioned by Dave and David,

    what are you trying to schedule that requires cron? Remember, ESXi != ESX; it does not have a Service Console, so there is no concept of RPMs. It is not RHEL.

    If you walk down the path of unlocking the unsupported busybox console, you can configure cron entries... you do not need tools such as crontab if you understand how cron works; the cron entry for root is stored under /var/spool/cron/crontabs/root, and this can be edited, though you will have to take special steps to make sure it persists across restarts.

    I recommend setting up a scheduled task / cron outside the ESXi host if you are running local scripts etc. in this unsupported environment; that allows greater flexibility and less mucking around in the busybox console.

    If you are just talking about general scheduled tasks then, depending on whether you are using a licensed version or not, you may be able to create scheduled tasks using the vSphere API through the vSphere SDK for Perl, PowerCLI, or other programmatic/scripting means.

    =========================================================================

    William Lam

    VMware vExpert 2009

  • [URGENT] HELP! VMware 2.0 stopped!

    Folks, I urgently need your help!

    I have a SUSE Linux 11.1 server running VMware Server 2.0.

    Inside VMware I have a Windows 2003 server that is my file and application server.

    This is at the accounting firm where I work.

    Well, what happened is that we could not send files through the SINTEGRA access, could not connect to Caixa's CONECTIVA, and Outlook didn't work either.

    I searched a lot here on the forum and managed to get Outlook working.

    Then, when I got squid working, VMware stopped... it went down, the internet went down, everything went down!

    I managed to get squid back to normal, but VMware still won't work!

    Since I couldn't get access through the browser, I tried to start the VM manually (cd /etc/rc.d and then the command ./vmware-autostart start).

    The program returned the following error:

    VMware Server is installed, but it has not been (correctly) configured

    for the running kernel. To (re-)configure it, invoke the

    following command: /usr/bin/vmware-config.pl.

    I remember seeing this just 3 months ago when I was installing VMware; it was caused by the kernel module... I solved it by installing the 'kernel-sources' package matching my kernel.

    But now I don't understand anything anymore!

    I'm afraid of running config.pl again and losing all my Windows server files,

    and still not getting the virtual machine back.

    Someone help me please, I don't know what to do!

    I'll post my system log to see if it helps, ok?

    If anyone has had this problem and managed to solve it, please help me!

    I need the system back urgently.

    -


    - SYSTEM LOG -


    Jun 16 08:18:43 cranio dhcpd: DHCPDISCOVER from 00:21:97:82:af:ad via eth1

    Jun 16 08:18:44 cranio dhcpd: DHCPOFFER on 192.168.1.22 to 00:21:97:82:af:ad (escrita) via eth1

    Jun 16 08:18:44 cranio dhcpd: DHCPREQUEST for 192.168.1.22 (192.168.1.1) from 00:21:97:82:af:ad (escrita) via eth1

    Jun 16 08:18:44 cranio dhcpd: DHCPACK on 192.168.1.22 to 00:21:97:82:af:ad (escrita) via eth1

    Jun 16 08:30:59 cranio dhcpd: DHCPDISCOVER from 00:0e:a6:ad:97:c8 via eth1

    Jun 16 08:31:00 cranio dhcpd: DHCPOFFER on 192.168.1.27 to 00:0e:a6:ad:97:c8 (escrita1) via eth1

    Jun 16 08:31:00 cranio dhcpd: DHCPREQUEST for 192.168.1.27 (192.168.1.1) from 00:0e:a6:ad:97:c8 (escrita1) via eth1

    Jun 16 08:31:00 cranio dhcpd: DHCPACK on 192.168.1.27 to 00:0e:a6:ad:97:c8 (escrita1) via eth1

    Jun 16 08:37:40 cranio dhcpd: DHCPREQUEST for 192.168.1.27 from 00:0e:a6:ad:97:c8 (escrita1) via eth1

    Jun 16 08:37:40 cranio dhcpd: DHCPACK on 192.168.1.27 to 00:0e:a6:ad:97:c8 (escrita1) via eth1

    Jun 16 08:52:16 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 08:52:19 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 08:54:42 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 08:55:00 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 08:55:01 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 08:55:07 cranio shadow[22697]: group already exists - group=pulse, by=0

    Jun 16 08:55:07 cranio useradd[22698]: account already exists - account=pulse, by=0

    Jun 16 08:55:07 cranio shadow[22699]: group already exists - group=pulse-rt, by=0

    Jun 16 08:55:07 cranio shadow[22700]: group already exists - group=pulse-access, by=0

    Jun 16 08:55:16 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 08:55:19 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 08:55:22 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 08:55:23 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 08:55:23 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 08:55:24 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 08:55:24 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 08:56:37 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 08:56:40 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 09:06:54 cranio squid[6555]: NETDB state saved; 0 entries, 0 msec

    Jun 16 09:15:21 cranio syslog-ng[2131]: Log statistics; dropped='pipe(/dev/xconsole)=0', dropped='pipe(/dev/tty10)=0', processed='center(queued)=1902', processed='center(received)=1784', processed='destination(newsnotice)=0', processed='destination(acpid)=3', processed='destination(firewall)=0', processed='destination(null)=3', processed='destination(mail)=2', processed='destination(mailinfo)=2', processed='destination(console)=25', processed='destination(newserr)=0', processed='destination(newscrit)=0', processed='destination(messages)=1776', processed='destination(mailwarn)=0', processed='destination(localmessages)=2', processed='destination(netmgm)=0', processed='destination(mailerr)=0', processed='destination(xconsole)=25', processed='destination(warn)=64', processed='source(src)=1784'

    Jun 16 09:15:42 cranio smartd[4175]: Device: /dev/sda, SMART Usage Attribute: 194 Temperature_Celsius changed from 114 to 113

    Jun 16 09:23:07 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 09:23:09 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 09:23:13 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 09:23:14 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 09:23:14 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 09:23:15 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 09:26:36 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 09:26:39 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 09:45:42 cranio smartd[4175]: Device: /dev/sda, SMART Usage Attribute: 194 Temperature_Celsius changed from 113 to 112

    Jun 16 09:52:57 cranio su: (to root) cranio on /dev/pts/2

    Jun 16 09:55:16 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 09:55:19 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 09:56:37 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 09:56:40 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 09:58:34 cranio squid[6555]: NETDB state saved; 0 entries, 0 msec

    Jun 16 10:15:21 cranio syslog-ng[2131]: Log statistics; dropped='pipe(/dev/xconsole)=0', dropped='pipe(/dev/tty10)=0', processed='center(queued)=1919', processed='center(received)=1801', processed='destination(newsnotice)=0', processed='destination(acpid)=3', processed='destination(firewall)=0', processed='destination(null)=3', processed='destination(mail)=2', processed='destination(mailinfo)=2', processed='destination(console)=25', processed='destination(newserr)=0', processed='destination(newscrit)=0', processed='destination(messages)=1793', processed='destination(mailwarn)=0', processed='destination(localmessages)=2', processed='destination(netmgm)=0', processed='destination(mailerr)=0', processed='destination(xconsole)=25', processed='destination(warn)=64', processed='source(src)=1801'

    Jun 16 10:34:42 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 10:43:45 cranio dhcpd: Wrote 0 deleted host decls to leases file.

    Jun 16 10:43:45 cranio dhcpd: Wrote 0 new dynamic host decls to leases file.

    Jun 16 10:43:45 cranio dhcpd: Wrote 10 leases to leases file.

    Jun 16 10:43:45 cranio dhcpd: DHCPREQUEST for 192.168.1.29 from 00:08:54:b0:59:9a (CONTABILIDADE01) via eth1

    Jun 16 10:43:45 cranio dhcpd: DHCPACK on 192.168.1.29 to 00:08:54:b0:59:9a (CONTABILIDADE01) via eth1

    Jun 16 10:44:16 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 10:44:19 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 10:45:33 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 10:45:36 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 10:50:12 cranio su: FAILED SU (to root) cranio on /dev/pts/4

    Jun 16 10:50:26 cranio su: (to root) cranio on /dev/pts/4

    Jun 16 10:50:27 cranio squid[6555]: NETDB state saved; 0 entries, 0 msec

    Jun 16 10:52:42 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 10:52:45 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 10:54:07 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 10:54:10 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 11:14:10 cranio -- MARK --

    Jun 16 11:15:21 cranio syslog-ng[2131]: Log statistics; dropped='pipe(/dev/xconsole)=0', dropped='pipe(/dev/tty10)=0', processed='center(queued)=1938', processed='center(received)=1820', processed='destination(newsnotice)=0', processed='destination(acpid)=3', processed='destination(firewall)=0', processed='destination(null)=3', processed='destination(mail)=2', processed='destination(mailinfo)=2', processed='destination(console)=25', processed='destination(newserr)=0', processed='destination(newscrit)=0', processed='destination(messages)=1812', processed='destination(mailwarn)=0', processed='destination(localmessages)=2', processed='destination(netmgm)=0', processed='destination(mailerr)=0', processed='destination(xconsole)=25', processed='destination(warn)=64', processed='source(src)=1820'

    Jun 16 11:35:21 cranio -- MARK --

    Jun 16 11:43:01 cranio squid[6555]: NETDB state saved; 0 entries, 0 msec

    Jun 16 11:43:58 cranio su: FAILED SU (to root) cranio on /dev/pts/6

    Jun 16 11:44:10 cranio su: (to root) cranio on /dev/pts/6

    Jun 16 11:44:10 cranio su: (to root) cranio on /dev/pts/6

    Jun 16 11:44:16 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 11:44:17 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 11:44:18 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 11:44:18 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 11:44:32 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 11:52:43 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 11:52:46 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 11:54:04 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 11:54:07 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 12:10:06 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 12:10:10 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0

    Jun 16 12:13:52 cranio dhcpd: Wrote 0 deleted host decls to leases file.

    Jun 16 12:13:52 cranio dhcpd: Wrote 0 new dynamic host decls to leases file.

    Jun 16 12:13:52 cranio dhcpd: Wrote 10 leases to leases file.

    Jun 16 12:13:52 cranio dhcpd: DHCPREQUEST for 192.168.1.22 from 00:21:97:82:af:ad (escrita) via eth1

    Jun 16 12:13:52 cranio dhcpd: DHCPACK on 192.168.1.22 to 00:21:97:82:af:ad (escrita) via eth1

    Jun 16 12:13:55 cranio dhcpd: DHCPREQUEST for 192.168.1.22 from 00:21:97:82:af:ad (escrita) via eth1

    Jun 16 12:13:55 cranio dhcpd: DHCPACK on 192.168.1.22 to 00:21:97:82:af:ad (escrita) via eth1

    Jun 16 12:14:42 cranio modprobe: FATAL: could not load /lib/modules/2.6.27.7-9-pae/modules.dep: no such file or directory

    Jun 16 12:15:21 cranio syslog-ng[2131]: Log statistics; dropped='pipe(/dev/xconsole)=0', dropped='pipe(/dev/tty10)=0', processed='center(queued)=1963', processed='center(received)=1845', processed='destination(newsnotice)=0', processed='destination(acpid)=3', processed='destination(firewall)=0', processed='destination(null)=3', processed='destination(mail)=2', processed='destination(mailinfo)=2', processed='destination(console)=25', processed='destination(newserr)=0', processed='destination(newscrit)=0', processed='destination(messages)=1837', processed='destination(mailwarn)=0', processed='destination(localmessages)=2', processed='destination(netmgm)=0', processed='destination(mailerr)=0', processed='destination(xconsole)=25', processed='destination(warn)=64', processed='source(src)=1845'
    Jun 16 12:15:42 cranio smartd[4175]: Device: /dev/sda, SMART Usage Attribute: 194 Temperature_Celsius changed from 112 to 113

    Jun 16 12:28:15 cranio modprobe: FATAL: Could not load /lib/modules/2.6.27.7-9-pae/modules.dep: No such file or directory
    Jun 16 12:28:15 cranio modprobe: FATAL: Could not load /lib/modules/2.6.27.7-9-pae/modules.dep: No such file or directory
    Jun 16 12:28:44 cranio modprobe: FATAL: Could not load /lib/modules/2.6.27.7-9-pae/modules.dep: No such file or directory
    Jun 16 12:28:44 cranio modprobe: FATAL: Could not load /lib/modules/2.6.27.7-9-pae/modules.dep: No such file or directory
    Jun 16 12:46:38 cranio squid[6555]: NETDB state saved; 0 entries, 0 msec
    Jun 16 12:51:38 cranio modprobe: FATAL: Could not load /lib/modules/2.6.27.7-9-pae/modules.dep: No such file or directory
    Jun 16 12:52:43 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0
    Jun 16 12:52:46 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0
    Jun 16 12:54:04 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0
    Jun 16 12:54:07 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0
    Jun 16 13:06:20 cranio modprobe: FATAL: Could not load /lib/modules/2.6.27.7-9-pae/modules.dep: No such file or directory
    Jun 16 13:06:20 cranio modprobe: FATAL: Could not load /lib/modules/2.6.27.7-9-pae/modules.dep: No such file or directory
    Jun 16 13:06:29 cranio modprobe: FATAL: Could not load /lib/modules/2.6.27.7-9-pae/modules.dep: No such file or directory
    Jun 16 13:06:29 cranio modprobe: FATAL: Could not load /lib/modules/2.6.27.7-9-pae/modules.dep: No such file or directory

    Jun 16 13:15:21 cranio syslog-ng[2131]: Log statistics; dropped='pipe(/dev/xconsole)=0', dropped='pipe(/dev/tty10)=0', processed='center(queued)=1979', processed='center(received)=1861', processed='destination(newsnotice)=0', processed='destination(acpid)=3', processed='destination(firewall)=0', processed='destination(null)=3', processed='destination(mail)=2', processed='destination(mailinfo)=2', processed='destination(console)=25', processed='destination(newserr)=0', processed='destination(newscrit)=0', processed='destination(messages)=1853', processed='destination(mailwarn)=0', processed='destination(localmessages)=2', processed='destination(netmgm)=0', processed='destination(mailerr)=0', processed='destination(xconsole)=25', processed='destination(warn)=64', processed='source(src)=1861'

    Jun 16 13:28:08 cranio modprobe: FATAL: Could not load /lib/modules/2.6.27.7-9-pae/modules.dep: No such file or directory
    Jun 16 13:28:08 cranio modprobe: FATAL: Could not load /lib/modules/2.6.27.7-9-pae/modules.dep: No such file or directory
    Jun 16 13:30:02 cranio dhcpd: Wrote 0 deleted host decls to leases file.
    Jun 16 13:30:02 cranio dhcpd: Wrote 0 new dynamic host decls to leases file.
    Jun 16 13:30:02 cranio dhcpd: Wrote 10 leases to leases file.
    Jun 16 13:30:02 cranio dhcpd: DHCPREQUEST for 192.168.1.22 from 00:21:97:82:af:ad (escrita) via eth1
    Jun 16 13:30:02 cranio dhcpd: DHCPACK on 192.168.1.22 to 00:21:97:82:af:ad (escrita) via eth1
    Jun 16 13:30:06 cranio dhcpd: DHCPREQUEST for 192.168.1.22 from 00:21:97:82:af:ad (escrita) via eth1
    Jun 16 13:30:06 cranio dhcpd: DHCPACK on 192.168.1.22 to 00:21:97:82:af:ad (escrita) via eth1
    Jun 16 13:30:09 cranio dhcpd: DHCPREQUEST for 192.168.1.22 from 00:21:97:82:af:ad (escrita) via eth1
    Jun 16 13:30:09 cranio dhcpd: DHCPACK on 192.168.1.22 to 00:21:97:82:af:ad (escrita) via eth1
    Jun 16 13:50:09 cranio -- MARK --
    Jun 16 13:52:43 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0
    Jun 16 13:52:47 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0
    Jun 16 13:54:04 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0
    Jun 16 13:54:08 cranio dhcpd: DHCPINFORM from 192.168.1.29 via eth1: not authoritative for subnet 192.168.1.0
    Jun 16 13:54:42 cranio modprobe: FATAL: Could not load /lib/modules/2.6.27.7-9-pae/modules.dep: No such file or directory
    Jun 16 14:01:20 cranio squid[6555]: NETDB state saved; 0 entries, 0 msec
    Jun 16 14:10:05 cranio modprobe: FATAL: Could not load /lib/modules/2.6.27.7-9-pae/modules.dep: No such file or directory
    Jun 16 14:10:05 cranio modprobe: FATAL: Could not load /lib/modules/2.6.27.7-9-pae/modules.dep: No such file or directory
    Jun 16 14:14 cranio dhcpd: DHCPREQUEST for 192.168.1.26 from 00:07:95:5e:ef:c8 (recepçao) via eth1
    Jun 16 14:14 cranio dhcpd: DHCPACK on 192.168.1.26 to 00:07:95:5e:ef:c8 (recepçao) via eth1

    Jun 16 14:15:21 cranio syslog-ng[2131]: Log statistics; dropped='pipe(/dev/xconsole)=0', dropped='pipe(/dev/tty10)=0', processed='center(queued)=2002', processed='center(received)=1884', processed='destination(newsnotice)=0', processed='destination(acpid)=3', processed='destination(firewall)=0', processed='destination(null)=3', processed='destination(mail)=2', processed='destination(mailinfo)=2', processed='destination(console)=25', processed='destination(newserr)=0', processed='destination(newscrit)=0', processed='destination(messages)=1876', processed='destination(mailwarn)=0', processed='destination(localmessages)=2', processed='destination(netmgm)=0', processed='destination(mailerr)=0', processed='destination(xconsole)=25', processed='destination(warn)=64', processed='source(src)=1884'

    Jun 16 14:23:30 cranio dhcpd: DHCPREQUEST for 192.168.1.22 from 00:21:97:82:af:ad (escrita) via eth1
    Jun 16 14:23:30 cranio dhcpd: DHCPACK on 192.168.1.22 to 00:21:97:82:af:ad (escrita) via eth1
    Jun 16 14:25:58 cranio squid[6555]: Killing RunCache, pid 6553
    Jun 16 14:25:58 cranio squid[6555]: Preparing for shutdown after 158177 requests
    Jun 16 14:25:58 cranio squid[6555]: Waiting 30 seconds for active connections to finish
    Jun 16 14:25:58 cranio squid[6555]: FD 11 Closing HTTP connection
    Jun 16 14:26:29 cranio squid[6555]: Shutting down...
    Jun 16 14:26:29 cranio squid[6555]: Closing unlinkd pipe on FD 12
    Jun 16 14:26:29 cranio squid[6555]: storeDirWriteCleanLogs: Starting...
    Jun 16 14:26:29 cranio squid[6555]: Finished. Wrote 25907 entries.
    Jun 16 14:26:29 cranio squid[6555]: Took 0.01 seconds (3569440.62 entries/sec).
    Jun 16 14:26:29 cranio squid[6555]: Squid Cache (Version 3.0.STABLE10): Exiting normally.
    Jun 16 14:26:32 cranio squid[25475]: Squid Parent: child process 25477 started
    Jun 16 14:26:32 cranio squid[25477]: Starting Squid Cache version 3.0.STABLE10 for i686-suse-linux-gnu...
    Jun 16 14:26:32 cranio squid[25477]: Process ID 25477
    Jun 16 14:26:32 cranio squid[25477]: With 4096 file descriptors available
    Jun 16 14:26:32 cranio squid[25477]: DNS Socket created at 0.0.0.0, port 56155, FD 7
    Jun 16 14:26:32 cranio squid[25477]: Adding nameserver 200.175.182.139 from /etc/resolv.conf
    Jun 16 14:26:32 cranio squid[25477]: Adding nameserver 200.175.5.139 from /etc/resolv.conf
    Jun 16 14:26:32 cranio squid[25477]: Adding nameserver 10.1.1.1 from /etc/resolv.conf
    Jun 16 14:26:32 cranio squid[25477]: User-Agent logging is disabled.
    Jun 16 14:26:32 cranio squid[25477]: Referer logging is disabled.
    Jun 16 14:26:33 cranio squid[25477]: Unlinkd pipe opened on FD 12
    Jun 16 14:26:33 cranio squid[25477]: Local cache digest enabled; rebuild/rewrite every 3600/3600 sec
    Jun 16 14:26:33 cranio squid[25477]: Swap maxSize 512000 KB, estimated 39384 objects
    Jun 16 14:26:33 cranio squid[25477]: Target number of buckets: 1969
    Jun 16 14:26:33 cranio squid[25477]: Using 8192 Store buckets
    Jun 16 14:26:33 cranio squid[25477]: Max Mem size: 409600 KB
    Jun 16 14:26:33 cranio squid[25477]: Max Swap size: 512000 KB
    Jun 16 14:26:33 cranio squid[25477]: Version 1 of swap file with LFS support detected...
    Jun 16 14:26:33 cranio squid[25477]: Rebuilding storage in spool (CLEAN)
    Jun 16 14:26:33 cranio squid[25477]: Using Least Load store dir selection
    Jun 16 14:26:33 cranio squid[25477]: Current Directory is /
    Jun 16 14:26:33 cranio squid[25477]: Loaded Icons.
    Jun 16 14:26:33 cranio squid[25477]: Accepting HTTP connections at 192.168.1.1, port 3128, FD 14.
    Jun 16 14:26:33 cranio squid[25477]: HTCP Disabled.

    Jun 16 14:36:46 cranio modprobe: FATAL: Could not load /lib/modules/2.6.27.7-9-pae/modules.dep: No such file or directory
    Jun 16 14:36:46 cranio modprobe: FATAL: Could not load /lib/modules/2.6.27.7-9-pae/modules.dep: No such file or directory
    Jun 16 14:37:41 cranio dhcpd: Wrote 0 deleted host decls to leases file.
    Jun 16 14:37:41 cranio dhcpd: Wrote 0 new dynamic host decls to leases file.
    Jun 16 14:37:41 cranio dhcpd: Wrote 10 leases to leases file.
    Jun 16 14:37:41 cranio dhcpd: DHCPREQUEST for 192.168.1.27 from 00:0e:a6:ad:97:c8 (escrita1) via eth1
    Jun 16 14:37:41 cranio dhcpd: DHCPACK on 192.168.1.27 to 00:0e:a6:ad:97:c8 (escrita1) via eth1

    Jun 16 14:47:41 cranio shutdown[25616]: shutting down for system reboot
    Jun 16 14:48:01 cranio init: Switching to runlevel: 6
    Jun 16 14:48:03 cranio kernel: bootsplash: status on console 0 changed to on
    Jun 16 14:48:06 cranio smartd[4175]: smartd received signal 15: Terminated
    Jun 16 14:48:06 cranio smartd[4175]: Device: /dev/sda, state written to /var/lib/smartmontools/smartd.WDC_WD2500YS_18SHB2-WD_WCANY4470774.ata.state
    Jun 16 14:48:06 cranio smartd[4175]: smartd is exiting (exit status 0)
    Jun 16 14:48:08 cranio auditd[3510]: Error sending signal_info request (Operation not supported)
    Jun 16 14:48:08 cranio auditd[3510]: The audit daemon is exiting.
    Jun 16 14:48:08 cranio avahi-daemon[16074]: Got SIGTERM, quitting.
    Jun 16 14:48:08 cranio avahi-daemon[16074]: Leaving mDNS multicast group on interface vmnet8.IPv4 with address 192.168.170.1.
    Jun 16 14:48:08 cranio avahi-daemon[16074]: Leaving mDNS multicast group on interface vmnet1.IPv4 with address 192.168.78.1.
    Jun 16 14:48:08 cranio avahi-daemon[16074]: Leaving mDNS multicast group on interface eth1.IPv4 with address 192.168.1.1.
    Jun 16 14:48:08 cranio avahi-daemon[16074]: Leaving mDNS multicast group on interface eth0.IPv4 with address 10.1.1.2.
    Jun 16 14:48:09 cranio sshd[4093]: Received signal 15; terminating.
    Jun 16 14:48:09 cranio kernel: device eth1 left promiscuous mode
    Jun 16 14:48:09 cranio kernel: bridge-eth1: disabled promiscuous mode
    Jun 16 14:48:09 cranio kernel: /dev/vmnet: open called by PID 8280 (vmware-vmx)
    Jun 16 14:48:09 cranio kernel: device eth1 entered promiscuous mode
    Jun 16 14:48:09 cranio kernel: bridge-eth1: enabled promiscuous mode
    Jun 16 14:48:09 cranio kernel: /dev/vmnet: port on hub 2 successfully opened
    Jun 16 14:48:10 cranio kernel: device eth1 left promiscuous mode
    Jun 16 14:48:10 cranio kernel: bridge-eth1: disabled promiscuous mode
    Jun 16 14:48:10 cranio /usr/lib/vmware/bin/vmware-hostd[6271]: Accepted password for user root from 127.0.0.1
    Jun 16 14:48:10 cranio watchdog-webAccess: Terminating watchdog with PID 6159
    Jun 16 14:48:10 cranio watchdog-webAccess: Signal received: exiting the watchdog
    Jun 16 14:48:11 cranio modprobe: FATAL: Could not load /lib/modules/2.6.27.7-9-pae/modules.dep: No such file or directory
    Jun 16 14:48:11 cranio kernel: vmmon: had to deallocate 159213 locked pages from vm driver f3485400
    Jun 16 14:48:11 cranio kernel: vmmon: had to deallocate 5186 AWE pages from vm driver f3485400
    Jun 16 14:48:11 cranio /usr/lib/vmware/bin/vmware-hostd[6271]: Accepted password for user root from 127.0.0.1
    Jun 16 14:48:11 cranio rpcbind: rpcbind terminating on signal. Restart with "rpcbind -w"
    Jun 16 14:48:12 cranio kernel: Kernel logging (proc) stopped.
    Jun 16 14:48:12 cranio kernel: Kernel log daemon terminating.
    Jun 16 14:48:12 cranio syslog-ng[2131]: Termination requested via signal, terminating;
    Jun 16 14:48:12 cranio syslog-ng[2131]: syslog-ng shutting down; version='2.0.9'

    Jun 16 14:49:22 cranio syslog-ng[2099]: syslog-ng starting up; version='2.0.9'
    Jun 16 14:49:22 cranio rchal: CPU frequency scaling is not supported by your processor.
    Jun 16 14:49:22 cranio rchal: boot with 'CPUFREQ=no' to avoid this warning.
    Jun 16 14:49:22 cranio rchal: Cannot load cpufreq governors - No cpufreq driver available
    Jun 16 14:49:26 cranio kernel: klogd 1.4.1, log source = /proc/kmsg started.
    Jun 16 14:49:26 cranio kernel: type=1505 audit(1245174559.888:2): operation="profile_load" name="/bin/ping" name2="default" pid=1880
    Jun 16 14:49:26 cranio kernel: type=1505 audit(1245174559.936:3): operation="profile_load" name="/sbin/klogd" name2="default" pid=1883
    Jun 16 14:49:26 cranio kernel: type=1505 audit(1245174560.016:4): operation="profile_load" name="/sbin/syslog-ng" name2="default" pid=1898
    Jun 16 14:49:26 cranio kernel: type=1505 audit(1245174560.084:5): operation="profile_load" name="/sbin/syslogd" name2="default" pid=1910
    Jun 16 14:49:26 cranio kernel: type=1505 audit(1245174560.176:6): operation="profile_load" name="/usr/sbin/avahi-daemon" name2="default" pid=1930
    Jun 16 14:49:26 cranio kernel: type=1505 audit(1245174560.329:7): operation="profile_load" name="/usr/sbin/identd" name2="default" pid=1932
    Jun 16 14:49:26 cranio kernel: type=1505 audit(1245174560.416:8): operation="profile_load" name="/usr/sbin/mdnsd" name2="default" pid=1933
    Jun 16 14:49:26 cranio kernel: type=1505 audit(1245174560.636:9): operation="profile_load" name="/usr/sbin/nscd" name2="default" pid=1960
    Jun 16 14:49:26 cranio kernel: type=1505 audit(1245174560.729:10): operation="profile_load" name="/usr/sbin/ntpd" name2="default" pid=1986
    Jun 16 14:49:26 cranio kernel: type=1505 audit(1245174560.792:11): operation="profile_load" name="/usr/sbin/traceroute" name2="default" pid=1987
    Jun 16 14:49:26 cranio kernel: nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
    Jun 16 14:49:26 cranio kernel: CONFIG_NF_CT_ACCT is deprecated and will be removed soon. Please use
    Jun 16 14:49:26 cranio kernel: nf_conntrack.acct=1 kernel parameter, acct=1 nf_conntrack module option or
    Jun 16 14:49:26 cranio kernel: sysctl net.netfilter.nf_conntrack_acct=1 to enable it.
    Jun 16 14:49:26 cranio kernel: ip_tables: (C) 2000-2006 Netfilter Core Team
    Jun 16 14:49:26 cranio kernel: powernow: This module only works with AMD K7 CPUs

    Jun 16 14:49:27 cranio kdm_config[2467]: Multiple occurrences of key "UseTheme" in a section of /usr/share/kde4/config/kdm/kdmrc
    Jun 16 14:49:28 cranio ifup: lo
    Jun 16 14:49:28 cranio ifup: lo
    Jun 16 14:49:28 cranio ifup: IP address: 127.0.0.1/8
    Jun 16 14:49:28 cranio ifup:
    Jun 16 14:49:28 cranio ifup:
    Jun 16 14:49:28 cranio ifup: IP address: 127.0.0.2/8
    Jun 16 14:49:28 cranio ifup:
    Jun 16 14:49:28 cranio ifup: eth0 device: Intel Corporation 82572EI Gigabit Ethernet Controller (Copper) (rev 06)
    Jun 16 14:49:29 cranio ifup: eth0
    Jun 16 14:49:29 cranio ifup: IP address: 10.1.1.2/8
    Jun 16 14:49:29 cranio ifup:
    Jun 16 14:49:29 cranio ifup-route: Error while executing:
    Jun 16 14:49:29 cranio ifup-route: Command 'ip route replace 10.1.1.2/8 via 10.1.1.1 dev eth0' returned:
    Jun 16 14:49:29 cranio ifup-route: RTNETLINK answers: Invalid argument
    Jun 16 14:49:29 cranio ifup-route: Configuration line: 10.1.1.2 10.1.1.1 255.0.0.0 eth0
    Jun 16 14:49:29 cranio kernel: vendor=8086 device=244
    Jun 16 14:49:29 cranio kernel: pci 0000:05:05.0: PCI INT A -> GSI 19 (level, low) -> IRQ 19
    Jun 16 14:49:30 cranio SuSEfirewall2: SuSEfirewall2 not active
    Jun 16 14:49:30 cranio ifup: eth1 device: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express
    Jun 16 14:49:30 cranio ifup: eth1
    Jun 16 14:49:30 cranio ifup: IP address: 192.168.1.1/24
    Jun 16 14:49:30 cranio ifup:
    Jun 16 14:49:30 cranio ifup-route: Error while executing:
    Jun 16 14:49:30 cranio ifup-route: Command 'ip route replace 192.168.1.1/24 via 10.1.1.1 dev eth1' returned:
    Jun 16 14:49:30 cranio ifup-route: RTNETLINK answers: Invalid argument
    Jun 16 14:49:30 cranio ifup-route: Configuration line: 192.168.1.1 10.1.1.1 255.255.255.0 eth1
    Jun 16 14:49:30 cranio kernel: 0000:03:00.0: eth0: Link is Up 10 Mbps Half Duplex, Flow Control: None
    Jun 16 14:49:30 cranio kernel: 0000:03:00.0: eth0: 10/100 speed: disabling TSO
    Jun 16 14:49:30 cranio SuSEfirewall2: SuSEfirewall2 not active
    Jun 16 14:49:31 cranio rpcbind: cannot create socket for udp6
    Jun 16 14:49:31 cranio rpcbind: cannot create socket for tcp6
    Jun 16 14:49:31 cranio kernel: tg3: eth1: Link is up at 100 Mbps, full duplex.
    Jun 16 14:49:31 cranio kernel: tg3: eth1: Flow control is on for TX and on for RX.

    Jun 16 14:49:39 cranio audispd: priority_boost_parser called with: 4
    Jun 16 14:49:39 cranio audispd: af_unix plugin initialized
    Jun 16 14:49:39 cranio audispd: audispd initialized with q_depth=80 and 1 active plugins
    Jun 16 14:49:39 cranio auditd[3503]: Started dispatcher: /sbin/audispd pid: 3508
    Jun 16 14:49:39 cranio auditd[3503]: Init complete, auditd 1.7.7 listening for events (startup state disable)
    Jun 16 14:49:40 cranio avahi-daemon[3523]: Found user 'avahi' (UID 103) and group 'avahi' (GID 104).
    Jun 16 14:49:40 cranio avahi-daemon[3523]: Successfully dropped root privileges.
    Jun 16 14:49:40 cranio avahi-daemon[3523]: avahi-daemon 0.6.23 starting up.
    Jun 16 14:49:40 cranio avahi-daemon[3523]: Loading service file /etc/avahi/services/sftp-ssh.service.
    Jun 16 14:49:40 cranio avahi-daemon[3523]: Loading service file /etc/avahi/services/ssh.service.
    Jun 16 14:49:40 cranio avahi-daemon[3523]: Joining mDNS multicast group on interface eth1.IPv4 with address 192.168.1.1.
    Jun 16 14:49:40 cranio avahi-daemon[3523]: New relevant interface eth1.IPv4 for mDNS.
    Jun 16 14:49:40 cranio avahi-daemon[3523]: Joining mDNS multicast group on interface eth0.IPv4 with address 10.1.1.2.
    Jun 16 14:49:40 cranio avahi-daemon[3523]: New relevant interface eth0.IPv4 for mDNS.
    Jun 16 14:49:40 cranio avahi-daemon[3523]: Network interface enumeration completed.
    Jun 16 14:49:40 cranio avahi-daemon[3523]: Registering new address record for 192.168.1.1 on eth1.IPv4.
    Jun 16 14:49:40 cranio avahi-daemon[3523]: Registering new address record for 10.1.1.2 on eth0.IPv4.
    Jun 16 14:49:40 cranio avahi-daemon[3523]: Registering HINFO record with values 'I686'/'LINUX'.
    Jun 16 14:49:41 cranio avahi-daemon[3523]: Server startup complete. Host name is cranio.local. Local service cookie is 915873236.

    Jun 16 14:49:41 cranio kernel: CPU0 attaching NULL sched-domain.
    Jun 16 14:49:41 cranio kernel: CPU1 attaching NULL sched-domain.
    Jun 16 14:49:41 cranio kernel: CPU2 attaching NULL sched-domain.
    Jun 16 14:49:41 cranio kernel: CPU3 attaching NULL sched-domain.
    Jun 16 14:49:41 cranio kernel: CPU0 attaching sched-domain:
    Jun 16 14:49:41 cranio kernel:  domain 0: span 0-1 level MC
    Jun 16 14:49:41 cranio kernel:   groups: 0 1
    Jun 16 14:49:41 cranio kernel:   domain 1: span 0-3 level CPU
    Jun 16 14:49:41 cranio kernel:    groups: 0-1 2-3
    Jun 16 14:49:41 cranio kernel:    domain 2: span 0-3 level NODE
    Jun 16 14:49:41 cranio kernel:     groups: 0-3
    Jun 16 14:49:41 cranio kernel: CPU1 attaching sched-domain:
    Jun 16 14:49:41 cranio kernel:  domain 0: span 0-1 level MC
    Jun 16 14:49:41 cranio kernel:   groups: 1 0
    Jun 16 14:49:41 cranio kernel:   domain 1: span 0-3 level CPU
    Jun 16 14:49:41 cranio kernel:    groups: 0-1 2-3
    Jun 16 14:49:41 cranio kernel:    domain 2: span 0-3 level NODE
    Jun 16 14:49:41 cranio kernel:     groups: 0-3
    Jun 16 14:49:41 cranio kernel: CPU2 attaching sched-domain:
    Jun 16 14:49:41 cranio kernel:  domain 0: span 2-3 level MC
    Jun 16 14:49:41 cranio kernel:   groups: 2 3
    Jun 16 14:49:41 cranio kernel:   domain 1: span 0-3 level CPU
    Jun 16 14:49:41 cranio kernel:    groups: 2-3 0-1
    Jun 16 14:49:41 cranio kernel:    domain 2: span 0-3 level NODE
    Jun 16 14:49:41 cranio kernel:     groups: 0-3
    Jun 16 14:49:41 cranio kernel: CPU3 attaching sched-domain:
    Jun 16 14:49:41 cranio kernel:  domain 0: span 2-3 level MC
    Jun 16 14:49:41 cranio kernel:   groups: 3 2
    Jun 16 14:49:41 cranio kernel:   domain 1: span 0-3 level CPU
    Jun 16 14:49:41 cranio kernel:    groups: 2-3 0-1
    Jun 16 14:49:41 cranio kernel:    domain 2: span 0-3 level NODE
    Jun 16 14:49:41 cranio kernel:     groups: 0-3

    Jun 16 14:49:42 cranio avahi-daemon[3523]: Service "cranio" (/etc/avahi/services/ssh.service) successfully established.
    Jun 16 14:49:42 cranio avahi-daemon[3523]: Service "SFTP File Transfer on cranio" (/etc/avahi/services/sftp-ssh.service) successfully established.
    Jun 16 14:49:43 cranio kernel: ppdev: user-space parallel port driver
    Jun 16 14:49:43 cranio smartd[3975]: smartd 5.39 2008-10-24 22:33 (openSUSE RPM) Copyright (C) 2002-8 by Bruce Allen, http://smartmontools.sourceforge.net
    Jun 16 14:49:43 cranio smartd[3975]: Opened configuration file /etc/smartd.conf
    Jun 16 14:49:43 cranio smartd[3975]: Drive: DEVICESCAN, implied '-a' Directive on line 26 of file /etc/smartd.conf
    Jun 16 14:49:43 cranio smartd[3975]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
    Jun 16 14:49:43 cranio sshd[4085]: Server listening on 0.0.0.0 port 22.
    Jun 16 14:49:43 cranio smartd[3975]: Device: /dev/sda, type changed from 'scsi' to 'sat'
    Jun 16 14:49:43 cranio smartd[3975]: Device: /dev/sda, opened
    Jun 16 14:49:43 cranio smartd[3975]: Device: /dev/sda, found in smartd database.
    Jun 16 14:49:44 cranio smartd[3975]: Device: /dev/sda, is SMART capable. Adding to "monitor" list.
    Jun 16 14:49:44 cranio smartd[3975]: Device: /dev/sda, state read from /var/lib/smartmontools/smartd.WDC_WD2500YS_18SHB2-WD_WCANY4470774.ata.state
    Jun 16 14:49:44 cranio smartd[3975]: Monitoring 1 ATA and 0 SCSI devices
    Jun 16 14:49:45 cranio smartd[3975]: Device: /dev/sda, state written to /var/lib/smartmontools/smartd.WDC_WD2500YS_18SHB2-WD_WCANY4470774.ata.state
    Jun 16 14:49:45 cranio smartd[4146]: smartd has fork()ed into background mode. New PID=4146.
    Jun 16 14:49:45 cranio dhcpd: Internet Systems Consortium DHCP Server V3.1.1
    Jun 16 14:49:45 cranio dhcpd: Copyright 2004-2008 Internet Systems Consortium.
    Jun 16 14:49:45 cranio dhcpd: All rights reserved.
    Jun 16 14:49:45 cranio dhcpd: For info, please visit http://www.isc.org/sw/dhcp/
    Jun 16 14:49:45 cranio dhcpd: Not searching LDAP since ldap-server, ldap-dn and ldap-port were not specified in the config file
    Jun 16 14:49:45 cranio dhcpd: Internet Systems Consortium DHCP Server V3.1.1
    Jun 16 14:49:45 cranio dhcpd: Copyright 2004-2008 Internet Systems Consortium.
    Jun 16 14:49:45 cranio dhcpd: All rights reserved.
    Jun 16 14:49:45 cranio dhcpd: For info, please visit http://www.isc.org/sw/dhcp/
    Jun 16 14:49:45 cranio dhcpd: Not searching LDAP since ldap-server, ldap-dn and ldap-port were not specified in the config file
    Jun 16 14:49:45 cranio dhcpd: Wrote 0 deleted host decls to leases file.
    Jun 16 14:49:45 cranio dhcpd: Wrote 0 new dynamic host decls to leases file.
    Jun 16 14:49:45 cranio dhcpd: Wrote 10 leases to leases file.
    Jun 16 14:49:46 cranio kernel: NET: Registered protocol family 17
    Jun 16 14:49:46 cranio dhcpd: Listening on LPF/eth1/00:1e:68:a9:bf:6d/192.168.1/24
    Jun 16 14:49:46 cranio dhcpd: Sending on   LPF/eth1/00:1e:68:a9:bf:6d/192.168.1/24
    Jun 16 14:49:46 cranio dhcpd: Sending on   Socket/fallback/fallback-net

    Jun 16 14:49:46 cranio /usr/sbin/cron[4252]: (CRON) STARTUP (V5.0)
    Jun 16 14:49:47 cranio kernel: bootsplash: status on console 0 changed to on
    Jun 16 14:49:47 cranio squid[4286]: Squid Parent: child process 4288 started
    Jun 16 14:49:47 cranio squid[4288]: Starting Squid Cache version 3.0.STABLE10 for i686-suse-linux-gnu...
    Jun 16 14:49:47 cranio squid[4288]: Process ID 4288
    Jun 16 14:49:47 cranio squid[4288]: With 4096 file descriptors available
    Jun 16 14:49:47 cranio squid[4288]: DNS Socket created at 0.0.0.0, port 59063, FD 7
    Jun 16 14:49:47 cranio squid[4288]: Adding nameserver 200.175.182.139 from /etc/resolv.conf
    Jun 16 14:49:47 cranio squid[4288]: Adding nameserver 200.175.5.139 from /etc/resolv.conf
    Jun 16 14:49:47 cranio squid[4288]: Adding nameserver 10.1.1.1 from /etc/resolv.conf
    Jun 16 14:49:47 cranio squid[4288]: User-Agent logging is disabled.
    Jun 16 14:49:47 cranio squid[4288]: Referer logging is disabled.
    Jun 16 14:49:47 cranio squid[4288]: Unlinkd pipe opened on FD 12
    Jun 16 14:49:47 cranio squid[4288]: Local cache digest enabled; rebuild/rewrite every 3600/3600 sec
    Jun 16 14:49:47 cranio squid[4288]: Swap maxSize 512000 KB, estimated 39384 objects
    Jun 16 14:49:47 cranio squid[4288]: Target number of buckets: 1969
    Jun 16 14:49:47 cranio squid[4288]: Using 8192 Store buckets
    Jun 16 14:49:47 cranio squid[4288]: Max Mem size: 409600 KB
    Jun 16 14:49:47 cranio squid[4288]: Max Swap size: 512000 KB
    Jun 16 14:49:48 cranio squid[4288]: Version 1 of swap file with LFS support detected...
    Jun 16 14:49:48 cranio squid[4288]: Rebuilding storage in spool (DIRTY)
    Jun 16 14:49:48 cranio squid[4288]: Using Least Load store dir selection
    Jun 16 14:49:48 cranio squid[4288]: Current Directory is /
    Jun 16 14:49:48 cranio squid[4288]: Loaded Icons.
    Jun 16 14:49:48 cranio squid[4288]: Accepting HTTP connections at 192.168.1.1, port 3128, FD 14.
    Jun 16 14:49:48 cranio squid[4288]: HTCP Disabled.

    Jun 16 14:50:08 cranio pulseaudio[4408]: pid.c: Stale PID file, overwriting.
    Jun 16 14:50:17 cranio kernel: CPU0 attaching NULL sched-domain.
    Jun 16 14:50:17 cranio kernel: CPU1 attaching NULL sched-domain.
    Jun 16 14:50:17 cranio kernel: CPU2 attaching NULL sched-domain.
    Jun 16 14:50:17 cranio kernel: CPU3 attaching NULL sched-domain.
    Jun 16 14:50:17 cranio kernel: CPU0 attaching sched-domain:
    Jun 16 14:50:17 cranio kernel:  domain 0: span 0-1 level MC
    Jun 16 14:50:17 cranio kernel:   groups: 0 1
    Jun 16 14:50:17 cranio kernel:   domain 1: span 0-3 level CPU
    Jun 16 14:50:17 cranio kernel:    groups: 0-1 2-3
    Jun 16 14:50:17 cranio kernel:    domain 2: span 0-3 level NODE
    Jun 16 14:50:17 cranio kernel:     groups: 0-3
    Jun 16 14:50:17 cranio kernel: CPU1 attaching sched-domain:
    Jun 16 14:50:17 cranio kernel:  domain 0: span 0-1 level MC
    Jun 16 14:50:17 cranio kernel:   groups: 1 0
    Jun 16 14:50:17 cranio kernel:   domain 1: span 0-3 level CPU
    Jun 16 14:50:17 cranio python: hp-systray(init)[4400]: warning: No hp: or hpfax: devices found in any installed CUPS queue. Exiting.
    Jun 16 14:50:17 cranio kernel:    groups: 0-1 2-3
    Jun 16 14:50:17 cranio kernel:    domain 2: span 0-3 level NODE
    Jun 16 14:50:17 cranio kernel:     groups: 0-3
    Jun 16 14:50:17 cranio kernel: CPU2 attaching sched-domain:
    Jun 16 14:50:17 cranio kernel:  domain 0: span 2-3 level MC
    Jun 16 14:50:17 cranio kernel:   groups: 2 3
    Jun 16 14:50:17 cranio kernel:   domain 1: span 0-3 level CPU
    Jun 16 14:50:17 cranio kernel:    groups: 2-3 0-1
    Jun 16 14:50:17 cranio kernel:    domain 2: span 0-3 level NODE
    Jun 16 14:50:17 cranio kernel:     groups: 0-3
    Jun 16 14:50:17 cranio kernel: CPU3 attaching sched-domain:
    Jun 16 14:50:17 cranio kernel:  domain 0: span 2-3 level MC
    Jun 16 14:50:17 cranio kernel:   groups: 3 2
    Jun 16 14:50:17 cranio kernel:   domain 1: span 0-3 level CPU
    Jun 16 14:50:17 cranio kernel:    groups: 2-3 0-1
    Jun 16 14:50:17 cranio kernel:    domain 2: span 0-3 level NODE
    Jun 16 14:50:21 cranio kernel:     groups: 0-3

    16 June at 14:52:44 cranio dhcpd: 192.168.1.29 via eth1 DHCPINFORM: no authority for the 192.168.1.0 subnet

    16 June at 14:52:44 cranio dhcpd: If this DHCP server is authoritative for this subnet.

    16 June at 14:52:44 cranio dhcpd: Please write a 'do authority;' directive is in the

    16 June at 14:52:44 cranio dhcpd: statement of subnet or within a scope that surrounds the

    16 June at 14:52:44 cranio dhcpd: subnet statement--for example, write it up

    16 June at 14:52:44 dhcpd cranio: the dhcpd.conf file.

    16 June at 14:52:49 cranio dhcpd: 192.168.1.29 via eth1 DHCPINFORM: no authority for the 192.168.1.0 subnet

    16 June at 14:53:21 cranio polkit-grant-helper [4739]: permission granted

    org.freedesktop.packagekit.system - update to the MIP 4410

    16 June at 14:54:05 cranio dhcpd: 192.168.1.29 via eth1 DHCPINFORM: no authority for the 192.168.1.0 subnet

    16 June at 14:54:08 cranio dhcpd: 192.168.1.29 via eth1 DHCPINFORM: no authority for the 192.168.1.0 subnet

    June 16 at 14:56:23 cranio su: (to root) craniosacral on/dev/pts/2

    16 June at 15:00:01 cranio squid [4983]: Squid Parent: child process 4985 began

    16 June at 15:00:01 cranio squid [4985]: starting Squid Cache 3.0.STABLE10 for i686-suse-linux-gnu version...

    16 June at 15:00:01 cranio squid [4985]: process ID 4985

    16 June at 15:00:01 cranio squid [4985]: with 4096 of available file descriptors

    16 June at 15:00:01 cranio squid [4985]: DNS Socket created at 0.0.0.0, port 60274, FD 7

    16 June at 15:00:01 cranio squid [4985]: adding 200.175.182.139 of /etc/resolv.conf nameserver

    16 June at 15:00:01 cranio squid [4985]: adding 200.175.5.139 of /etc/resolv.conf nameserver

    16 June at 15:00:01 cranio squid [4985]: adding 10.1.1.1 /etc/resolv.conf nameserver

    16 June at 15:00:01 cranio squid [4985]: User-Agent logging is disabled.

    16 June at 15:00:01 cranio squid [4985]: Referer logging is disabled.

    16 June at 15:00:01 cranio squid [4985]: Unlinkd leads open on 12 FD

    16 June at 15:00:01 cranio squid [4985]: digest cache enabled Local; reconstruction/rewrite every 3600/3600 seconds

    16 June at 15:00:01 cranio squid [4985]: Swap maxSize 512000 KB, estimated 39384 objects

    16 June at 15:00:01 cranio squid [4985]: target the number of compartments: 1969

    16 June at 15:00:01 cranio squid [4985]: store of 8192 using buckets

    16 June at 15:00:01 cranio squid [4985]: size Max Mem: 409600 KB

    16 June at 15:00:01 cranio squid [4985]: size Max Swap: 512000 KB

    16 June at 15:00:01 cranio squid [4985]: Version 1 of the pagefile with detected LFS support...

    16 June at 15:00:01 cranio squid [4985]: reconstruction of storage to spool (DIRTY)

    16 June at 15:00:01 cranio squid [4985]: selection of dir for the store with charge less

    16 June at 15:00:01 cranio squid [4985]: current directory is /.

    16 June at 15:00:01 cranio squid [4985]: load the icons.

    16 June at 15:00:01 cranio squid [4985]: HTTP connections agreeing to 192.168.1.1 port 3128, DF 14.

    16 June at 15:00:01 cranio squid [4985]: HTPC disabled.

    16 June-15:04:51 cranio squid [5156]: Squid Parent: child process 5158 began

    16 June-15:04:51 cranio squid [5158]: starting Squid Cache 3.0.STABLE10 for i686-suse-linux-gnu version...

    16 June-15:04:51 cranio squid [5158]: process ID 5158

    16 June-15:04:51 cranio squid [5158]: with 4096 of available file descriptors

    16 June-15:04:51 cranio squid [5158]: DNS Socket created at 0.0.0.0, port 58420, FD 7

    16 June-15:04:51 cranio squid [5158]: adding 200.175.182.139 of /etc/resolv.conf nameserver

    16 June-15:04:51 cranio squid [5158]: adding 200.175.5.139 of /etc/resolv.conf nameserver

    16 June-15:04:51 cranio squid [5158]: adding 10.1.1.1 /etc/resolv.conf nameserver

    16 June-15:04:51 cranio squid [5158]: User-Agent logging is disabled.

    16 June-15:04:51 cranio squid [5158]: Referer logging is disabled.

    16 June-15:04:51 cranio squid [5158]: Unlinkd leads open on 12 FD

    16 June-15:04:51 cranio squid [5158]: digest cache enabled Local; reconstruction/rewrite every 3600/3600 seconds

    16 June-15:04:51 cranio squid [5158]: Swap maxSize 512000 KB, estimated 39384 objects

    16 June-15:04:51 cranio squid [5158]: target the number of compartments: 1969

    16 June-15:04:51 cranio squid [5158]: store of 8192 using buckets

    16 June-15:04:51 cranio squid [5158]: size Max Mem: 409600 KB

    16 June-15:04:51 cranio squid [5158]: size Max Swap: 512000 KB

    16 June-15:04:51 cranio squid [5158]: Version 1 of the pagefile with detected LFS support...

    16 June-15:04:51 cranio squid [5158]: reconstruction of storage to spool (DIRTY)

    16 June-15:04:51 cranio squid [5158]: selection of dir for the store with charge less

    16 June-15:04:51 cranio squid [5158]: current directory is /.

    16 June-15:04:51 cranio squid [5158]: load the icons.

    16 June-15:04:51 cranio squid [5158]: HTTP connections agreeing to 192.168.1.1 port 3128, DF 14.

    16 June-15:04:51 cranio squid [5158]: HTPC disabled.

    16 June-15:07:27 cranio shutdown [5209]: closing for the restart of the system

    16 June-15:07:28 cranio init: switching to runlevel: 6

    16 June-15:07:30 craniosacral core: bootsplash: State on console 0 passed to it

    16 June-15:07:30 cranio sshd [4085]: received signal 15. closing.

    16 June-15:07:30 cranio avahi-daemon [3523]: Got SIGTERM, quit smoking.

    16 June-15:07:30 cranio avahi-daemon [3523]: leaving mDNS group multicast on the eth1 interface. IPv4 with the address 192.168.1.1.

    16 June-15:07:30 cranio avahi-daemon [3523]: leaving mDNS group multicast on the eth0 interface. IPv4 address 10.1.1.2.

    16 June-15:07:30 cranio smartd [4146]: signal smartd received 15: completed

    16 June-15:07:30 cranio smartd [4146]: Device: / dev/sda , State

    Written at

    / var/lib/smartmontools/smartd. WDC_WD2500YS_18SHB2 - WD_WCANY4470774.ata.state

    16 June-15:07:30 cranio smartd [4146]: smartd fate (exit status 0)

    16 June-15:07:30 cranio auditd [3503]: error sending request (operation not supported) signal_info

    16 June-15:07:30 cranio auditd [3503]: the demon of the audit came out.

    16 June-15:07:30 cranio watchdog-webAccess: PID file not found /var/run/vmware/watchdog-webAccess.PID

    16 June-15:07:30 cranio watchdog-webAccess: unable to put an end to watchdog: can not find the process

    16 June at 15:07:32 cranio rpcbind: rpcbind ending signal. Restart with 'rpcbind w.

    16 June-15:07:33 craniosacral core: core record (proc) stopped.

    16 June-15:07:33 cranio kernel: kernel log demon fencing.

    16 June at 15:07:33 cranio syslog-ng [2099]: requested via the signal termination endpoint.

    16 June-15:07:33 cranio syslog-ng [2099]: syslog-ng closing; version = '2.0.9'

    16 June-15:08:34 cranio syslog-ng [2043]: syslog-ng, commissioning; version = '2.0.9'

    16 June at 15:08:35 cranio rchal: CPU frequency scaling is not supported by your processor.

    16 June at 15:08:35 cranio rchal: start with 'CPUFREQ = no' to avoid this warning.

    16 June at 15:08:35 cranio rchal: cannot load the cpufreq Governors - no driver available cpufreq

    16 June-15:08:39 craniosacral core: klogd 1.4.1 source journal = / proc/kmsg began.

    16 June-15:08:39 craniosacral core: type = 1505 audit(1245175712.624:2):

    operation = "profile_load" name = "/ bin/ping" name2 = 'default' pid = 1820

    16 June-15:08:39 craniosacral core: type = 1505 audit(1245175712.660:3):

    operation = "profile_load" name = "/ sbin/klogd" name2 = 'default' pid = 1821

    16 June-15:08:39 craniosacral core: type = 1505 audit(1245175712.729:4):

    operation = "profile_load" name = "/ sbin/syslog-ng" name2 = 'default' pid = 1823

    16 June-15:08:39 craniosacral core: type = 1505 audit(1245175712.805:5):

    operation = "profile_load" name = "/ sbin/syslogd" name2 = 'default' pid = 1841

    16 June-15:08:39 craniosacral core: type = 1505 audit(1245175712.877:6):

    operation = "profile_load" name = "/ usr/sbin/avahi-daemon" Name2 = "default".

    PID = 1861

    16 June-15:08:39 craniosacral core: type = 1505 audit(1245175712.969:7):

    operation = "profile_load" name = "/ usr/sbin/identd" Name2 = "default".

    PID = 1894

    16 June-15:08:39 craniosacral core: type = 1505 audit(1245175713.105:8):

    operation = "profile_load" name = "/ usr/sbin/mdnsd" name2 = 'default' pid = 1912

    16 June-15:08:39 craniosacral core: type = 1505 audit(1245175713.176:9):

    operation = "profile_load" name = "/ usr/sbin/nscd" name2 = 'default' pid = 1913

    16 June-15:08:39 craniosacral core: type = 1505 audit(1245175713.268:10):

    operation = "profile_load" name = "/ usr/sbin/ntpd" name2 = 'default' pid = 1914

    16 June-15:08:39 craniosacral core: type = 1505 audit(1245175713.328:11):

    operation = "profile_load" name = "/ usr/sbin/traceroute" Name2 = "default".

    PID = 1915

    16 June-15:08:39 craniosacral core: nf_conntrack version 0.5.0 (16384 buckets, max 65536)

    16 June-15:08:39 craniosacral core: CONFIG_NF_CT_ACCT is deprecated and will be removed soon. Plase use

    16 June-15:08:39 craniosacral core: parameter nf_conntrack.acct = 1, acct = 1 option module nf_conntrack kernel or

    16 June-15:08:39 craniosacral core: sysctl net.netfilter.nf_conntrack_acct = 1 to enable it.

    16 June-15:08:39 craniosacral core: ip_tables: (C) 2000-2006 Netfilter Core Team

    16 June-15:08:39 craniosacral core: powernow: this module only works with AMD K7 processors

    16 June-15:08:40 cranio kdm_config [2395]: multiple occurrences of key

    "UseTheme" in the section of/usr/share/kde4/config/kdm/kdmrc

    16 June-15:08:40 cranio ifup: lo

    16 June-15:08:40 cranio ifup: lo

    16 June-15:08:40 cranio ifup: IP address: 127.0.0.1/8

    June 16 at 15:08:40 cranio ifup:

    June 16 at 15:08:40 cranio ifup:

    16 June-15:08:40 cranio ifup: IP address: 127.0.0.2/8

    June 16 at 15:08:40 cranio ifup:

    16 June-15:08:40 cranio ifup: device eth0: Intel Corporation 82572EI Gigabit controller Ethernet (copper) (rev 06)

    16 June-15:08:40 cranio ifup: eth0

    16 June-15:08:40 cranio ifup: IP address: 10.1.1.2/8

    June 16 at 15:08:40 cranio ifup:

    16 June at 15:08:41 cranio ifup-route: error during execution:

    16 June at 15:08:41 cranio ifup-route: command ' replace Route 10.1.1.2/8 through 10.1.1.1 ip dev eth0' returned:

    16 June at 15:08:41 cranio ifup-route: RTNETLINK(7) answers: invalid argument

    16 June at 15:08:41 cranio ifup-route: Configuration line: 10.1.1.2 10.1.1.1 255.0.0.0 eth0

    16 June at 15:08:41 cranio SuSEfirewall2: non-active SuSEfirewall2

    16 June-15:08:42 cranio ifup: device eth1: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express

    16 June-15:08:42 cranio ifup: eth1

    16 June-15:08:42 cranio ifup: IP address: 192.168.1.1/24

    June 16 at 15:08:42 cranio ifup:

    16 June-15:08:42 cranio kernel: vendor = peripheral 8086 = 244

    16 June-15:08:42 craniosacral core: 0000:05:05.0 pci: PCI INT A - & gt; GSI 19 (low level) - & gt; IRQ 19

    16 June-15:08:42 cranio ifup-route: error during execution:

    16 June-15:08:42 cranio ifup-route: command "ip route replace 192.168.1.1/24 via 10.1.1.1 dev eth1' returned:"

    16 June-15:08:42 cranio ifup-route: RTNETLINK(7) answers: invalid argument

    16 June-15:08:42 cranio ifup-route: Configuration line: 192.168.1.1 10.1.1.1 255.255.255.0 eth1

    16 June-15:08:42 cranio kernel: 0000:03:00.0: eth0: link is up to 10 Mbps Half Duplex, flow control: no

    16 June-15:08:43 cranio kernel: 0000:03:00.0: eth0: speed 10/100: deactivation of OSI

    16 June-15:08:42 cranio SuSEfirewall2: non-active SuSEfirewall2

    16 June-15:08:43 craniosacral core: tg3: eth1: connection to 100 Mbit/s, two-way.

    16 June-15:08:43 craniosacral core: tg3: eth1: flow control is on for the TX and the RX.

    16 June-15:08:43 cranio rpcbind: cannot create the socket for udp6

    16 June-15:08:43 cranio rpcbind: cannot create the socket for tcp6

    16 June-15:08:51 cranio auditd [3428]: started Dispatcher: / sbin/audispd pid: 3433

    16 June-15:08:51 cranio audispd: priority_boost_parser called with: 4

    16 June-15:08:51 cranio audispd: initialized af_unix plugin

    16 June-15:08:51 cranio audispd: audispd initialized with q_depth = 80 and 1 active plugins

    16 June-15:08:51 cranio auditd [3428]: Init complete, auditd 1.7.7 listening events (start disabled state)

    16 June at 15:08:51 cranio avahi-daemon [3447]: find user 'avahi' (UID 103) and the group 'avahi' (GID 104).

    16 June-15:08:51 cranio avahi-daemon [3447]: fallen successfully root privileges.

    16 June-15:08:51 cranio avahi-daemon [3447]: avahi-daemon 0.6.23 commissioning.

    16 June-15:08:51 cranio avahi-daemon [3447]: load the file etc/avahi/services/sftp-ssh.service service.

    16 June-15:08:51 cranio avahi-daemon [3447]: load the file etc/avahi/services/ssh.service service.

    16 June-15:08:51 cranio avahi-daemon [3447]: join mDNS multicast group on the eth1 interface. IPv4 with the address 192.168.1.1.

    16 June-15:08:51 cranio avahi-daemon [3447]: new interface eth1 correspondent. IPv4 for mDNS.

    16 June-15:08:51 cranio avahi-daemon [3447]: join mDNS multicast group on the eth0 interface. IPv4 address 10.1.1.2.

    16 June-15:08:51 cranio avahi-daemon [3447]: new interface eth0 correspondent. IPv4 for mDNS.

    16 June-15:08:51 cranio avahi-daemon [3447]: network enumeration interface is complete.

    16 June-15:08:51 cranio avahi-daemon [3447]: registration new record address 192.168.1.1 on eth1. IPv4.

    16 June-15:08:51 cranio avahi-daemon [3447]: registration new record address 10.1.1.2 on eth0. IPv4.

    16 June-15:08:51 cranio avahi-daemon [3447]: no HINFO record with values 'I686' / 'LINUX '.

    16 June-15:08:52 cranio avahi-daemon [3447]: full server startup.

    Host name is cranio.local. Local service is 1810577979.

    16 June-15:08:53 cranio avahi-daemon [3447]: Service "cranio" (/ etc/avahi/services/ssh.service) successfully established.

    16 June-15:08:53 cranio avahi-daemon [3447]: Service 'SFTP File Transfer.

    on cranio"(/ etc/avahi/services/sftp-ssh.service) with success

    implemented.

    16 June at 15:08:55 craniosacral core: CPU0 fixing NULL sched-field.

    16 June at 15:08:55 craniosacral core: CPU1 fixing NULL sched-field.

    16 June at 15:08:55 craniosacral core: CPU2 fixing NULL sched-field.

    16 June at 15:08:55 craniosacral core: CPU3 fixing NULL sched-field.

    16 June at 15:08:55 craniosacral core: CPU0 fixing sched-field:

    16 June at 15:08:55 craniosacral core: domain 0: level scale 0-1 MC

    16 June at 15:08:55 craniosacral core: groups: 0 1

    16 June at 15:08:55 craniosacral core: area 1: CPU level scale 0-3

    16 June at 15:08:55 craniosacral core: groups: 0 - 1-2-3

    16 June at 15:08:55 craniosacral core: area 2: 0 to 3 level NŒUD

    16 June at 15:08:55 craniosacral core: groups: 0-3

    16 June at 15:08:55 craniosacral core: CPU1 fixing sched-field:

    16 June at 15:08:55 craniosacral core: domain 0: level scale 0-1 MC

    16 June at 15:08:55 craniosacral core: groups: 1 0

    16 June at 15:08:55 craniosacral core: area 1: CPU level scale 0-3

    16 June at 15:08:55 craniosacral core: groups: 0 - 1-2-3

    16 June at 15:08:55 craniosacral core: area 2: 0 to 3 level NŒUD

    16 June at 15:08:55 craniosacral core: groups: 0-3

    16 June at 15:08:55 craniosacral core: CPU2 fixing sched-field:

    16 June at 15:08:55 craniosacral core: domain 0: span level 2-3 MC

    16 June at 15:08:55 craniosacral core: groups: 2 3

    16 June at 15:08:55 craniosacral core: area 1: CPU level scale 0-3

    16 June at 15:08:55 craniosacral core: groups: 2-3 0-1

    16 June at 15:08:55 craniosacral core: area 2: 0 to 3 level NŒUD

    16 June at 15:08:55 craniosacral core: groups: 0-3

    16 June at 15:08:55 craniosacral core: CPU3 fixing sched-field:

    16 June at 15:08:55 craniosacral core: domain 0: span level 2-3 MC

    16 June at 15:08:55 craniosacral core: groups: 3 2

    16 June at 15:08:55 craniosacral core: area 1: CPU level scale 0-3

    16 June at 15:08:55 craniosacral core: groups: 2-3 0-1

    16 June at 15:08:55 craniosacral core: area 2: 0 to 3 level NŒUD

    16 June at 15:08:55 craniosacral core: groups: 0-3

    16 June at 15:08:55 cranio dhcpd: Internet Systems Consortium DHCP Server 3.1.1

    16 June at 15:08:55 cranio dhcpd: Copyright 2004 - 2008 Internet Systems Consortium.

    16 June at 15:08:55 cranio dhcpd: all rights reserved.

    16 June at 15:08:55 cranio dhcpd: for information, please visit http://www.isc.org/sw/dhcp/

    16 June at 15:08:55 cranio dhcpd: do not search LDAP from ldap-server,.

    the base ldap-dn and LDAP port were not specified in the config file

    16 June at 15:08:55 cranio dhcpd: Internet Systems Consortium DHCP Server 3.1.1

    16 June at 15:08:55 cranio dhcpd: Copyright 2004 - 2008 Internet Systems Consortium.

    16 June at 15:08:55 cranio dhcpd: all rights reserved.

    16 June at 15:08:55 cranio dhcpd: for information, please visit http://www.isc.org/sw/dhcp/

    16 June at 15:08:55 cranio dhcpd: do not search LDAP from ldap-server,.

    the base ldap-dn and LDAP port were not specified in the config file

    16 June at 15:08:55 cranio dhcpd: wrote 0 deleted host decls leases file.

    16 June at 15:08:55 cranio dhcpd: wrote 0 new host Dynamics decls for the leases file.

    16 June at 15:08:55 cranio dhcpd: 10 wrote leased to the leases file.

    16 June-15:08:56 cranio kernel: NET: registered protocol family 17

    16 June-15:08:56 cranio dhcpd: listening on LPF/eth1/00:1e:68:a9:bf:6d/192.168.1/24

    16 June-15:08:56 cranio dhcpd: send on LPF/eth1/00:1e:68:a9:bf:6d/192.168.1/24

    16 June-15:08:56 cranio dhcpd: sending on Socket/emergency/rescue-net

    16 June-15:08:57 cranio/usr/sbin/cron [4033]: START (CRON) (V5.0)

    16 June-15:08:57 cranio smartd [4047]: smartd 5,39 2008-10-24 22:33

    (openSUSE RPM) Copyright (C) 2002-8 by Bruce

    Allen, http://smartmontools.sourceforge.net

    16 June-15:08:57 cranio smartd [4047]: opens the /etc/smartd.conf file configuration

    16 June-15:08:57 cranio smartd [4047]: Drive: DEVICESCAN, involved '-a ' Directive on line 26 of the /etc/smartd.conf file

    16 June at 15:08:57 cranio smartd [4047]: Configuration /etc/smartd.conf file has been scanned, found DEVICESCAN, scanning devices

    16 June-15:08:57 cranio smartd [4047]: Device: / dev/sda, modified type of "scsi" to "sat".

    16 June-15:08:57 cranio smartd [4047]: Device: / dev/sda , opened

    16 June-15:08:57 cranio smartd [4047]: Device: / dev/sda , in the smartd database.

    16 June-15:08:58 cranio smartd [4047]: Device: / dev/sda , SMART may be able. Adding to the "monitor" list

    16 June-15:08:58 cranio smartd [4047]: Device: / dev/sda , read state

    Of

    / var/lib/smartmontools/smartd. WDC_WD2500YS_18SHB2 - WD_WCANY4470774.ata.state

    16 June-15:08:58 cranio smartd [4047]: monitoring 1 ATA and 0 SCSI devices

    16 June-15:08:58 cranio smartd [4047]: Device: / dev/sda , State

    Written at

    / var/lib/smartmontools/smartd. WDC_WD2500YS_18SHB2 - WD_WCANY4470774.ata.state

    16 June-15:08:58 cranio smartd [4065]: smartd a fork () ed in background mode. The PID = 4065.

    16 June to 15:09 cranio sshd [4126]: server listens on 0.0.0.0 port 22.

    16 June at 15:09:01 craniosacral core: bootsplash: State on console 0 passed to it

    16 June-15:09:02 cranio squid [4150]: Squid Parent: child process 4152 began

    16 June-15:09:02 cranio squid [4152]: starting Squid Cache 3.0.STABLE10 for i686-suse-linux-gnu version...

    16 June-15:09:02 cranio squid [4152]: process ID 4152

    16 June-15:09:02 cranio squid [4152]: with 4096 of available file descriptors

    16 June-15:09:02 cranio squid [4152]: DNS Socket created at 0.0.0.0, port 57243, FD 7

    16 June-15:09:02 cranio squid [4152]: adding 200.175.182.139 of /etc/resolv.conf nameserver

    16 June-15:09:02 cranio squid [4152]: adding 200.175.5.139 of /etc/resolv.conf nameserver

    16 June-15:09:02 cranio squid [4152]: adding 10.1.1.1 /etc/resolv.conf nameserver

    16 June-15:09:02 cranio squid [4152]: User-Agent logging is disabled.

    16 June-15:09:02 cranio squid [4152]: Referer logging is disabled.

    16 June-15:09:02 cranio squid [4152]: Unlinkd leads open on 12 FD

    16 June-15:09:02 cranio squid [4152]: digest cache enabled Local; reconstruction/rewrite every 3600/3600 seconds

    16 June-15:09:02 cranio squid [4152]: Swap maxSize 512000 KB, estimated 39384 objects

    16 June-15:09:02 cranio squid [4152]: target the number of compartments: 1969

    16 June-15:09:02 cranio squid [4152]: store of 8192 using buckets

    16 June-15:09:02 cranio squid [4152]: size Max Mem: 409600 KB

    16 June-15:09:02 cranio squid [4152]: size Max Swap: 512000 KB

    16 June-15:09:02 cranio squid [4152]: Version 1 of the pagefile with detected LFS support...

    16 June-15:09:02 cranio squid [4152]: reconstruction of storage to spool (DIRTY)

    16 June-15:09:02 cranio squid [4152]: selection of dir for the store with charge less

    16 June-15:09:02 cranio squid [4152]: current directory is /.

    16 June-15:09:02 cranio squid [4152]: load the icons.

    16 June-15:09:02 cranio squid [4152]: HTTP connections agreeing to 192.168.1.1 port 3128, DF 14.

    16 June-15:09:02 cranio squid [4152]: HTPC disabled.

    16 June-15:09:21 cranio pulseaudio [4270]: pid.c: overwhelming, stale PID file.

    16 June-15:09:24 craniosacral core: CPU0 fixing NULL sched-field.

    16 June-15:09:24 craniosacral core: CPU1 fixing NULL sched-field.

    16 June-15:09:24 craniosacral core: CPU2 fixing NULL sched-field.

    16 June-15:09:24 craniosacral core: CPU3 fixing NULL sched-field.

    16 June-15:09:24 craniosacral core: CPU0 fixing sched-field:

    16 June-15:09:24 craniosacral core: domain 0: level scale 0-1 MC

    16 June-15:09:24 craniosacral core: groups: 0 1

    16 June-15:09:24 craniosacral core: area 1: CPU level scale 0-3

    16 June-15:09:24 craniosacral core: groups: 0 - 1-2-3

    16 June-15:09:24 craniosacral core: area 2: 0 to 3 level NŒUD

    16 June-15:09:24 craniosacral core: groups: 0-3

    16 June-15:09:24 craniosacral core: CPU1 fixing sched-field:

    16 June-15:09:24 craniosacral core: domain 0: level scale 0-1 MC

    16 June-15:09:24 craniosacral core: groups: 1 0

    16 June-15:09:24 craniosacral core: area 1: CPU level scale 0-3

    16 June-15:09:24 craniosacral core: groups: 0 - 1-2-3

    16 June-15:09:24 craniosacral core: area 2: 0 to 3 level NŒUD

    16 June-15:09:24 craniosacral core: groups: 0-3

    16 June-15:09:24 craniosacral core: CPU2 fixing sched-field:

    16 June-15:09:24 craniosacral core: domain 0: span level 2-3 MC

    16 June-15:09:24 craniosacral core: groups: 2 3

    16 June-15:09:24 craniosacral core: area 1: CPU level scale 0-3

    16 June-15:09:24 craniosacral core: groups: 2-3 0-1

    16 June-15:09:24 craniosacral core: area 2: 0 to 3 level NŒUD

    16 June-15:09:24 craniosacral core: groups: 0-3

    16 June-15:09:24 craniosacral core: CPU3 fixing sched-field:

    16 June-15:09:24 craniosacral core: domain 0: span level 2-3 MC

    16 June-15:09:24 craniosacral core: groups: 3 2

    16 June-15:09:24 craniosacral core: area 1: CPU level scale 0-3

    16 June-15:09:24 craniosacral core: groups: 2-3 0-1

    16 June-15:09:24 craniosacral core: area 2: 0 to 3 level NŒUD

    16 June-15:09:24 craniosacral core: groups: 0-3

    16 June-15:09:26 cranio python: hp-systray (init) [4261]: warning: no hp:

    or hpfax: found in any devices installed CUPS queue. On the way out.

    16 June-15:12:08 cranio su: (to root) on/dev/pts/3 craniosacral

    16 June-15:20:29 craniosacral core: 0000:03:00.0: eth0: link is

    16 June at 15:20:31 cranio kernel: 0000:03:00.0: eth0: link is up to 10 Mbps Half Duplex, flow control: no

    16 June at 15:20:31 cranio kernel: 0000:03:00.0: eth0: speed 10/100: deactivation of OSI

    16 June-15:20:52 cranio kernel: 0000:03:00.0: eth0: link is

    16 June-15:21:11 craniosacral core: 0000:03:00.0: eth0: link is up to 10 Mbps Half Duplex, flow control: no

    16 June-15:21:11 craniosacral core: 0000:03:00.0: eth0: speed 10/100: deactivation of OSI

    16 June-15:21:40 craniosacral core: 0000:03:00.0: eth0: link is

    16 June-15:21:42 cranio kernel: 0000:03:00.0: eth0: link is up to 10 Mbps Half Duplex, flow control: no

    16 June-15:21:42 cranio kernel: 0000:03:00.0: eth0: speed 10/100: deactivation of OSI

    16 June-15:25:45 cranio su: (to root) craniosacral on/dev/pts/2

    16 June-15:37:34 cranio shutdown [5167]: closing for the stopping of the system

    16 June-15:37:34 cranio init: switching to runlevel: 0

    16 June-15:37:36 craniosacral core: bootsplash: State on console 0 passed to it

    16 June-15:37:36 cranio smartd [4065]: signal smartd received 15: completed

    16 June-15:37:36 cranio smartd [4065]: Device: / dev/sda , State

    Written at

    / var/lib/smartmontools/smartd. WDC_WD2500YS_18SHB2 - WD_WCANY4470774.ata.state

    16 June-15:37:36 cranio smartd [4065]: smartd fate (exit status 0)

    16 June-15:37:36 cranio Squid: spleen squid.conf line 61: acl IPliberados src "/ etc/squid/IPliberado.

    16 June-15:37:36 cranio sshd [4126]: received signal 15. closing.

    16 June-15:37:36 cranio avahi-daemon [3447]: Got SIGTERM, quit smoking.

    16 June-15:37:36 cranio avahi-daemon [3447]: leaving mDNS group multicast on the eth1 interface. IPv4 with the address 192.168.1.1.

    16 June-15:37:36 cranio avahi-daemon [3447]: leaving mDNS group multicast on the eth0 interface. IPv4 address 10.1.1.2.

    16 June-15:37:36 cranio auditd [3428]: error sending request (operation not supported) signal_info

    16 June-15:37:36 cranio auditd [3428]: the demon of the audit came out.

    16 June-15:37:37 cranio watchdog-webAccess: PID file not found /var/run/vmware/watchdog-webAccess.PID

    16 June-15:37:37 cranio watchdog-webAccess: unable to put an end to watchdog: can not find the process

    16 June-15:37:38 cranio squid [4150]: Squid Parent: child process 4152 stopped due to the signal 15

    16 June-15:37:38 cranio squid [4150]: leave due to the unexpected forced shutdown

    16 June-15:37:39 cranio rpcbind: rpcbind ending signal. Restart with 'rpcbind w.

    16 June-15:37:39 craniosacral core: core record (proc) stopped.

    16 June-15:37:39 cranio kernel: kernel log demon fencing.

    16 June-15:37:39 cranio syslog-ng [2043]: requested via the signal termination endpoint;

    16 June-15:37:39 cranio syslog-ng [2043]: syslog-ng closing; version = '2.0.9'

    16 June-15:39:06 cranio syslog-ng [2090]: syslog-ng, commissioning; version = '2.0.9'

    16 June-15:39:07 cranio rchal: CPU frequency scaling is not supported by your processor.

    16 June-15:39:07 cranio rch

    Folks, I urgently need your help!

    I have a SUSE Linux 11.1 server running VMware Server 2.0.

    Inside VMware I have a Windows 2003 server, which is my file and application server.

    Ok.

    Well, what happened is that we could not send files: neither SINTEGRA nor Caixa's Conectividade Social worked, and Outlook did not work either.

    Then Squid stopped working, VMware went down... the internet went down, everything went down!

    SINTEGRA, Conectividade Social, etc. have nothing to do with the VM; they depend on your transparent proxy (Squid).

    Since I could not get access through the browser, I tried to start the VM manually (# cd /etc/rc.d and then the command # ./vmware-autostart start).

    The program returned the following error:

    VMware Server is installed, but it has not been (correctly) configured
    for the running kernel. To (re-)configure it, invoke the
    following command: /usr/bin/vmware-config.pl.

    You need to run that same script. You probably applied some update to your system (kernel), so VMware Server necessarily has to be reconfigured.

    > I only saw this myself about 3 months ago, when I was installing
    VMware, and it was because of the kernel modules... I solved
    it by installing the 'kernel-source' package matching my kernel.

    Exactly. You need the sources for your kernel.

    Regards,

    Ancker
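
    The fix discussed above boils down to two commands (a sketch; the package name and use of zypper are assumptions based on openSUSE 11.1 - adjust to your distribution's package manager):

        # Install kernel sources matching the running kernel, then
        # rebuild/reconfigure the VMware Server kernel modules:
        zypper install kernel-source
        /usr/bin/vmware-config.pl

    This generally has to be repeated after every kernel update, since the VMware modules are built against a specific kernel version.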

  • need help to run a script from cron

    Here's what I'm trying to get running (not the complete script, but all I need for this example):

    #!/bin/sh
    esxcfg-info -s > /tmp/info.txt
    exit

    This script works from the command line, and info.txt is filled with loads of data. If I run it from cron, I don't get any errors, but the file has no data (0 KB).

    I run other shell scripts from cron with no problems, but they don't call esxcfg commands. I guess that's because cron doesn't run as root the way I do from the command line. Can anyone help me get this shell script to run, or tell me what I need for cron to complete it (perhaps as root)? I don't want Perl examples; my shell script is much longer and I'm not a Perl programmer, so I can't do the conversion.

    Thank you!

    You must specify the full path to the command you are trying to run (and to any other input files), since $PATH may not be set when the cron job starts.

    The 'which' command can help you locate it:

    [root@himalaya Devastator-2009-05-22--2]# which esxcfg-info
    /usr/sbin/esxcfg-info
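
    With the full path from 'which', the script's call can go straight into the crontab with absolute paths throughout (a sketch; the schedule here is an arbitrary example, and /usr/sbin is just the path found above):

        # run nightly at 02:00; absolute path, stderr captured too
        0 2 * * * /usr/sbin/esxcfg-info -s > /tmp/info.txt 2>&1

    Redirecting stderr as well (2>&1) means any "command not found" message lands in the output file instead of silently disappearing, which makes this kind of cron failure much easier to diagnose.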
    

    =========================================================================

    William Lam

    VMware vExpert 2009

    Scripts for VMware ESX/ESXi and resources at: http://engineering.ucsb.edu/~duonglt/vmware/

    vGhetto scripts repository

    VMware Code Central - Scripts/code samples for developers and administrators

    http://Twitter.com/lamw

    If you find this information useful, please give points to "correct" or "useful".

  • Cron failed when I run ghettoVCB.sh

    Hi all

    ghettoVCB.sh works correctly when I run it manually, but when I run it from cron, it does not work:

    # crontab -l

    32 18 22 1 * /tmp/ghettoVCB.sh /tmp/backup_vms > /tmp/LOG_Backup.log 2>&1

    ## Taking snapshot backup for DACDCU01_BACKUP ... ##

    /tmp/ghettoVCB.sh: line 202: vmkfstools: command not found

    Removing snapshot from DACDCU01_BACKUP ...

    ## Backup completed for DACDCU01_BACKUP! ####################

    Start time: Thu Jan 22 16:33:01 CET 2009

    End time: Thu Jan 22 16:33:09 CET 2009

    Duration: 8 seconds

    Completed backing up the specified Virtual Machines!

    What is the error?

    Thanks a lot.

    FYI, this was actually fixed a few weeks back (check the change log), and the latest version of the script is available at: ghettoVCB.sh - Free alternative for backing up virtual machines for ESX(i) 3.5, 4.x & 5.x
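
    For anyone hitting the same "vmkfstools: command not found" with an older copy, the cause is the same cron caveat discussed earlier in this thread: cron starts jobs with a minimal environment, so a script that calls tools by bare name must set its own PATH. A minimal sketch (the directory list is an assumption; check where `which vmkfstools` points on your host):

    ```shell
    #!/bin/sh
    # Prepend the usual system tool directories so bare-name calls
    # such as 'vmkfstools' resolve even under cron's minimal environment.
    PATH=/bin:/sbin:/usr/bin:/usr/sbin:$PATH
    export PATH
    ```

    Putting these two lines at the top of the script (or hard-coding absolute paths) makes it behave the same from cron as from an interactive shell.
    
    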

