iNode files

Can I safely delete iNode files in lost+found without causing problems for OS X? I have two of these files of more than 5 GB each.

Chances are very good that you can.

Files of this size have generally turned out to be OS X installers that got lost.

You can check this post

<Re: iNode large file in lost + found>

in particular the response from 'amyseqmedia' that has been marked "helpful".

Tags: Mac OS & System Software

Similar Questions

  • Files opened in an iCloud-linked application are instantly copied to the application's folder in iCloud.

    When I open a file from any folder in the Finder with an application that is linked to iCloud, e.g. PDF Expert or Preview, the file is instantly copied to the application's iCloud folder. This only happens on the Mac.

    On the Mac, the file appears in the folder as something like a shortcut (as in the picture):

    If I open the mobile application (i.e. PDF Expert) there is a copy of the file in the application's folder.

    I first thought this happened only with PDF Expert, but I discovered that it happens with any application that is enabled in the iCloud Drive settings. It's really annoying to remove the copies every time you open a file. What this feature was actually supposed to do is create a folder in iCloud Drive that is assigned to the application on every iCloud device.

    Maybe someone knows how to turn it off, or maybe it's just a bug.

    It looks like a feature, not a bug.

    iCloud is not really duplicating files into the application's file collection. It looks like a duplicate file of the same size as the original you opened, but iCloud saves storage by creating hard links.

    I checked several of the apparent duplicates in the Terminal, and if I list a file with the Terminal command

    ls -li somefileoniclouddrive

    I see that the inode numbers of the original and the duplicate are identical. So no additional storage is required.
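    The identical-inode check described above can be reproduced with any pair of hard-linked files; a minimal sketch (the file names here are invented for the demo):

```shell
# Create a file and a hard link to it, then compare inode numbers.
echo "sample" > original.txt
ln original.txt duplicate.txt

# ls -i prints the inode number before each name; identical numbers
# mean both names refer to the same on-disk data.
ino1=$(ls -i original.txt | awk '{print $1}')
ino2=$(ls -i duplicate.txt | awk '{print $1}')

[ "$ino1" = "$ino2" ] && echo "same inode: no extra storage used"
```

    With `ls -li` the second column also shows the link count, which reads 2 for both names of a hard-linked pair.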

  • Unable to mount external hard drive under Ubuntu Linux - file system and MFT damaged

    I get the following error when I try to mount my external hard drive on Ubuntu.

    Unable to mount
    Error mounting /dev/sdc1 at /media/khalibloo/Khalibloo2: Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sdc1" "/media/khalibloo/Khalibloo2"' exited with non-zero exit status 13: ntfs_attr_pread_i: ntfs_pread failed: Input/output error
    Failed to read MFT, mft=6 count=1 br=-1: Input/output error
    Failed to open inode FILE_Bitmap: Input/output error
    Failed to mount '/dev/sdc1': Input/output error

    NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details.

    It does not mount on Windows either: "I/O device error".

    This is a hard disk with a single NTFS partition.

    Of course, I tried chkdsk /f. It reported several file segments as unreadable, but didn't say whether it fixed them or not (apparently not). I also tried the /b flag.

    ntfsfix reported the volume as corrupt.

    TestDisk was able to correct a small error in the partition table by adding the '80' active-partition flag (that was all). TestDisk also confirmed that the boot sector was fine and matched its backup. However, when trying to repair the MFT, it could not read the MFT. It also could not list the files on the drive. It says the file system may be damaged.

    Active@ also shows that the MFT is missing or damaged.

    I tried replacing the USB cable; nothing changed. I tried updating the driver; Windows reported it was already up to date. And yes, the driver was enabled.

    So, how can I fix the file system? Or the MFT? Or the I/O device error?
    I had used the hard drive for months; this problem started 2 days ago. I also tried mounting it on other computers, without success.

    Hello

    I'm sorry to read about the problems with your drive.
    Unfortunately, I think the question is a bit misplaced, as your case appears to be about the health of your external hard drive rather than a Linux-specific problem.

    To add to that: if the tools you've used have failed to fix your drive, then I don't think a 'normal' user forum will do any better.
    I know it's a late recommendation and probably just adds salt to your wounds: if you really care about the files, make a sector-level backup before doing any file system / partition modifications.
    The reason is very simple - if the tool you use misjudges which actions are the right ones and commits those changes, you move away from your goal - fast!
    Instead of repairing, the tool can simply corrupt the data even further.
    Each subsequent change makes the mess exponentially worse.

    Judging by the details in your message, I'm sorry to say that I think your drive is defective, or at least that the data is lost.
    Some companies may well be able to recover it, but that normally costs a fortune.

    Unless you're willing to spend astronomical amounts of money on data rescue, I suggest accepting the data loss and looking at what you can still do with the drive.
    Try formatting it. If that doesn't work either, the drive is defective and must be replaced.

    Sorry I can't be of more help.

    Tom BR
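    The sector-level backup Tom recommends is usually made with GNU ddrescue on failing media; the same idea can be sketched with plain `dd`. In this sketch an ordinary file stands in for the device, since a real invocation would read from a block device such as `/dev/sdc1` (placeholder):

```shell
# A regular file stands in for the failing device; on real hardware
# the input would be the block device itself (e.g. /dev/sdc1).
printf 'simulated disk contents' > fake_device.bin

# conv=noerror,sync keeps dd running past read errors and pads
# unreadable blocks with zeros so later offsets stay aligned.
dd if=fake_device.bin of=sector_backup.img bs=512 conv=noerror,sync 2>/dev/null

# Repair tools can then be pointed at the image instead of the disk.
ls -l sector_backup.img
```

    On genuinely failing hardware, GNU ddrescue is the better choice because it retries bad regions and keeps a map file of what was recovered.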

  • El Capitan: cannot delete files in the Trash

    Strange, but I downloaded a widget, and when I try to delete it, El Capitan never lets me... I tried various Apple Help instructions, but they all fail. Here is some sample output from the Terminal:

    https://support.Apple.com/en-CA/TS1402

    and http://StackOverflow.com/questions/2642147/How-to-Remove-Files-and-directories-q

    DC_iMAC:.Trash MAC_at_home$ ls -la
    total 16
    drwx------   4 MAC_at_home  staff   136 29 Mar 11:13 .
    drwxr-xr-x+ 51 MAC_at_home  staff  1734 26 Mar 11:53 ..
    -rw-r--r--@  1 MAC_at_home  staff  6148 29 Mar 11:13 .DS_Store
    drwxr-xr-x   3 root         admin   102 26 Mar 12:35 Web Clip.wdgt

    DC_iMAC:.Trash MAC_at_home$ rm -df "Web Clip.wdgt"
    rm: Web Clip.wdgt: Directory not empty

    DC_iMAC:.Trash MAC_at_home$ rm -dfRi "Web Clip.wdgt"
    examine files in directory Web Clip.wdgt? y
    examine files in directory Web Clip.wdgt/WebClip.plugin? y
    examine files in directory Web Clip.wdgt/WebClip.plugin/Contents? y
    examine files in directory Web Clip.wdgt/WebClip.plugin/Contents/MacOS? y
    remove Web Clip.wdgt/WebClip.plugin/Contents/MacOS/WebClip? y
    rm: Web Clip.wdgt/WebClip.plugin/Contents/MacOS/WebClip: Operation not permitted
    remove Web Clip.wdgt/WebClip.plugin/Contents/MacOS? y
    rm: Web Clip.wdgt/WebClip.plugin/Contents/MacOS: Permission denied
    remove Web Clip.wdgt/WebClip.plugin/Contents? y
    rm: Web Clip.wdgt/WebClip.plugin/Contents: Permission denied
    remove Web Clip.wdgt/WebClip.plugin? y
    rm: Web Clip.wdgt/WebClip.plugin: Permission denied
    remove Web Clip.wdgt? y
    rm: Web Clip.wdgt: Directory not empty

    I also tried sudo rm... and used my password for the user account, but it does not work.

    I have an administrator account on the system as well, so I logged into that and tried to erase the file, but it wouldn't let me do it from outside the user account either.

    I think I need to get the full rights of the admin account while I am in the user account...

    any ideas?

    Try holding down the Option key while emptying the Trash.
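    One detail worth noting from the transcript above: `rm -d` only removes *empty* directories, which is why the last attempt still ends with "Directory not empty" even after descending the tree. A harmless reproduction with throwaway names:

```shell
# Build a small non-empty directory tree (names are invented).
mkdir -p widget_demo/inner
touch widget_demo/inner/payload

# -d removes only empty directories, so this refuses.
rm -d widget_demo 2>/dev/null || echo "not empty: rm -d refuses"

# -R descends the tree and removes everything in it.
rm -R widget_demo && echo "removed recursively"
```

    The "Operation not permitted" errors in the transcript are a separate issue (ownership/flags on the root-owned widget), which is what the Option-key trick addresses.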

  • How can I determine which files given file record segments point to?

    I'm on NTFS, and I have a 2 TB volume with a lot of files on it.  Fortunately, I cloned the entire volume to another 2 TB volume a while back (using WinDD, basically an exact bit-by-bit image of the whole 2 TB).  Unfortunately, the volume I was working from failed, and my last clone was quite old.  To remedy this, I switched to Linux and ddrescue to recover as much of the failing disk as I could.  When I finally gave up on the recovery (after weeks, when it looked like it would take years to get much more), I was left with 1.3 MiB of bad space in two or three sections of the volume, mostly in 1 KiB stretches (generally there were 2 bad sectors, followed by a number of good sectors, followed by 2 more bad sectors, and so on).  The volume had 64 KiB clusters, which still left potentially a lot of corruption.  After all this, I took the newly cloned volume (from the bad disk) back to Windows and ran chkdsk.  It found 1 corrupted file record segment, 336 orphaned file record segments and 440 bad index entries.  It then deleted all of those (which included about 6 files that chkdsk moved to its found folder; the rest were individual files which are now gone).  I don't know which files disappeared, but I know that most of the lost files are probably on my good old clone of the volume.  Comparing the volumes on a per-folder basis would be a nightmare, and I suspect I can't easily use a tool (or find a free one) because of the number of changes that had not been backed up.  I have the chkdsk log saved so I can sort through it.  Can someone tell me a good way to find out which files the deleted file record segments point to on the volume where they have not been removed?  For example, here are two chkdsk log records:

    Deleting corrupt file record segment 1107859.

    Deleting orphan file record segment 1069930.

    Since I know specifically which file record segments to look up on the good clone, can I somehow extract file information from the MFT given the file record segment numbers?

    Note that I don't have to use XP to do this, which happens to be where the volume was created and repaired.

    Well, I finally had a chance to try some Linux NTFS utilities.  I'm not sure whether they come from ntfs-3g or ntfsprogs, but anyone with a mainstream Linux distribution should be able to fire them up and get the information I needed easily.  The numbers are inode numbers, and two commands were very useful.  This one provided exactly what I needed:

    ntfscluster -I

    It included the full path of the file in question with very little extra information.  This next command provided a lot of information that I didn't need:

    ntfsinfo -i

    It did not provide the path, but it does provide the inode number of the parent directory, so you can reconstruct the path by applying the command again and again.
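    On an ordinary mounted filesystem the same inode-to-path reverse lookup can be done with `find -inum`; the NTFS tools above do the analogous job against an unmounted NTFS volume. A runnable sketch (the paths are invented for the demo):

```shell
# Create a small tree and note the inode of one file, the way a
# chkdsk log records a file record segment number.
mkdir -p demo/parent
echo data > demo/parent/target.txt
ino=$(ls -i demo/parent/target.txt | awk '{print $1}')

# Reverse lookup: which path under demo/ owns this inode?
find demo -inum "$ino"    # prints demo/parent/target.txt
```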

  • Persistent ghost files - storage leak.

    Hi people,

    I've been working through a somewhat worrisome issue lately: if you QFile.remove() a file that MediaPlayer is playing, or has MediaPlayer.pause()d, or even recently MediaPlayer.stop()ped, the file is deleted, but the disk storage taken up by the file never becomes available again.

    The first time I came across this I was perplexed: the used and free storage totals on the SD card don't add up to the capacity of the card, and there is no way to recover the lost space other than reformatting the card.

    I have also confirmed that it is not specific to one file system; it can also occur on the QNX6 file system on the phone's internal storage. What happens when the phone storage fills up with ghost files? You have to do a security wipe!

    There seem to be two issues here:

    1. There is no easy way to remove a file's placeholder in the file system when it isn't giving up the space used (which can lead to the need to reformat your phone/card). I guess it can be considered a storage leak.

    2. There is no obvious way to delete a file that has recently been buffered by the MediaPlayer.

    Here is the command-line output I captured when the file _wasn't_ playing:

    $ ls -l
    total 105107
    -rw-rw-rw- 1 devuser 1000_shared 53814712 Feb 08 19:23 sn0441.mp3
    $ df -kP .
    Filesystem       1024-blocks     Used Available Capacity Mounted on
    /dev/emmc/user0     13826032  9625376   4200656      70% /
    $ df -kP .
    Filesystem       1024-blocks     Used Available Capacity Mounted on
    /dev/emmc/user0     13826032  9572792   4253240      70% /

    You can see that 9625376 - 9572792 = 52584 KB, which is about the size of the file.

    If I do the same thing with a file that is playing:

    $ df -kP .
    Filesystem       1024-blocks     Used Available Capacity Mounted on
    /dev/emmc/user0     13826032  9625408   4200624      70% /

    $ ls -l
    total 222732
    -rw-rw-rw- 1 devuser 1000_shared 60223698 Feb 08 19:17 sn0438.mp3
    -rw-rw-rw- 1 devuser 1000_shared 53814712 Feb 08 19:23 sn0441.mp3

    > the file is deleted by the app here.

    $ ls -l
    total 105107
    -rw-rw-rw- 1 devuser 1000_shared 53814712 Feb 08 19:23 sn0441.mp3
    $ df -kP .
    Filesystem       1024-blocks     Used Available Capacity Mounted on
    /dev/emmc/user0     13826032  9625376   4200656      70% /

    9625408 - 9625376 is 32 k, not anywhere near 96 MB.

    I can also confirm that this issue has been observed on 10.0, 10.1 and 10.2.1.

    Has anyone else seen anything like this? I'd certainly appreciate advice on how to remove a file that has been recently used by the MediaPlayer.

    Cheers,

    Eric

    Here's the response from the file system folks.  Hope this helps clear things up!

    Per POSIX behaviour, file content must not be released until all open handles on the file are released.  If we cannot free up the space (after a crash, for example), we will reclaim the space during startup for the QNX6 file system.
     
    On the FAT formats it's different, because the file system has no concept of POSIX names and inodes.  We emulate that behaviour in our implementation, but the lack of power-safe features means it is possible to lose this space if the media is removed or the system powers off with file handles open.  You can use mass storage mode, or place the card in a PC, to run a check-and-repair from the host system.
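    The POSIX behaviour described above is easy to observe from a shell: an unlinked file's blocks stay allocated while any descriptor still holds it open, and are freed only when the last handle closes. A sketch:

```shell
# Create a 1 MiB file.
dd if=/dev/zero of=big.dat bs=1024 count=1024 2>/dev/null

# Hold it open on descriptor 3, then unlink the name.
exec 3< big.dat
rm big.dat

# The name is gone, but the blocks remain allocated until fd 3 closes.
ls big.dat 2>/dev/null || echo "name gone, blocks still held by fd 3"

# Closing the descriptor is what actually releases the storage.
exec 3<&-
```

    This is the same mechanism behind the "ghost" space in the post: the MediaPlayer service still held a handle on the removed file.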

  • 0 PTR blocks free - cannot create new files on the datastore

    We have been experiencing problems trying to power on virtual machines. When attempting to power them on, we see the error "Failed to extend swap file from 0 KB to 2097152 KB".

    We checked that the .vswp files are created in the virtual machine's folder on the datastore. Connecting to the ESXi host, we saw the following error messages in vmkernel.log:

    2016-01-16T21:19:40.556Z cpu1:4971732)WARNING: Res3: 6984: 'freenas-6-ds': [rt 3] No Space - did not find enough resources after second pass! (requested: 1, found: 0)
    2016-01-16T21:19:40.556Z cpu1:4971732)Res3: 6985: 'freenas-6-ds': [rt 3] resources t 0, e 0, pn 16, bm 0, b 0, rc 0, u 0, i 0, nf 4031, pe 0, oe 0
    2016-01-16T21:19:40.556Z cpu1:4971732)WARNING: SwapExtend: 683: Failed to extend swap file from 0 KB to 2097152 KB.

    This was surprising given that we have about 14 TB of space available on the datastore:

    [root@clueless:~] df -h
    Filesystem   Size   Used  Available  Use%  Mounted on
    VMFS-5      20.0T   5.4T      14.6T   27%  /vmfs/volumes/freenas-six-ds

    However, when we used "dd" to write a 20 GB file, we got "no space left on device":

    [root@clueless:/vmfs/volumes/55a00d31-3dc0f02c-9803-025056000040/deleteme] dd if=/dev/urandom of=deleteme bs=1024 count=2024000
    dd: writing 'deleteme': No space left on device
    263734+0 records in
    263733+0 records out
    [root@clueless:/vmfs/volumes/55a00d31-3dc0f02c-9803-025056000040/deleteme] ls -lh deleteme
    -rw-r--r-- 1 root root 255.1M Jan 19 01:02 deleteme

    We verified that we have free inodes:

    Ramdisk Name  System  Include in Coredumps  Reserved   Maximum     Used      Peak Used  Free  Reserved Free  Maximum Inodes  Allocated Inodes  Used Inodes  Mount Point
    ------------  ------  --------------------  ---------  ----------  --------  ---------  ----  -------------  --------------  ----------------  -----------  -----------
    root          true    true                  32768 KiB  32768 KiB   176 KiB   176 KiB    99%   99%            9472            4096              3575         /
    etc           true    true                  28672 KiB  28672 KiB   284 KiB   320 KiB    99%   99%            4096            1024              516          /etc
    opt           true    true                  0 KiB      32768 KiB   0 KiB     0 KiB      100%  0%             8192            1024              8            /opt
    var           true    true                  5120 KiB   49152 KiB   484 KiB   516 KiB    99%   90%            8192            384               379          /var
    tmp           false   false                 2048 KiB   262144 KiB  20 KiB    360 KiB    99%   99%            8192            256               8            /tmp
    hostdstats    false   false                 0 KiB      310272 KiB  3076 KiB  3076 KiB   99%   0%             8192            32                5            /var/lib/vmware/hostd/stats


    We believe the cause is having 0 free PTR blocks:

    [root@clueless:/vmfs/volumes/55a00d31-3dc0f02c-9803-025056000040] vmkfstools -P -v 10 /vmfs/volumes/freenas-six-ds
    VMFS-5.61 file system spanning 1 partitions.
    File system label (if any): freenas-six-ds
    Mode: public ATS-only
    Capacity 21989964120064 (20971264 file blocks * 1048576), 16008529051648 (15266923 blocks) avail, max supported file size 69201586814976
    Volume Creation Time: Fri Jul 10 18:21:37 2015
    Files (max/free): 130000/119680
    Ptr Blocks (max/free): 64512/0
    Sub Blocks (max/free): 32000/28323
    Secondary Ptr Blocks (max/free): 256/256
    File Blocks (overcommit/used/overcommit %): 0/5704341/0
    Ptr Blocks  (overcommit/used/overcommit %): 64512/0/0
    Sub Blocks  (overcommit/used/overcommit %): 3677/0/0
    Volume Metadata size: 911048704
    UUID: 55a00d31-3dc0f02c-9803-025056000040
    Logical device: 55a00d30-985bb532-BOI.30-025056000040
    Partitions spanned (on "lvm"):
            naa.6589cfc0000006f3a584e7c8e67a8ddd:1
    Is Native Snapshot Capable: YES
    OBJLIB-LIB: ObjLib cleaned.
    WORKER: asyncOps=0 maxActiveOps=0 maxPending=0 maxCompleted=0

    When we power off a virtual machine, it releases 1 PTR block, and we can then power on another VM or create the 20 GB file using "dd". Once we reach 0 free PTR blocks, we are unable to create new files.

    Can anyone give any suggestions on how we might be able to free up PTR blocks? We have already tried restarting all management services on all connected ESXi hosts.

    FreeNAS is not running on a virtual machine.

    We solved the problem by finding that a lot of PTR blocks were being used by many of our virtual machine templates. Removing the templates' disks solved the problem.
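    Since pointer blocks are consumed by large files, a first diagnostic step is simply listing the biggest files on the datastore. A generic sketch (a scratch directory and invented file names stand in for the real datastore path):

```shell
# Build a demo tree; on a real host this would be pointed at the
# datastore mount (e.g. /vmfs/volumes/<datastore> - placeholder).
mkdir -p store
dd if=/dev/zero of=store/big.vmdk bs=1024 count=2048 2>/dev/null
dd if=/dev/zero of=store/small.log bs=1024 count=1 2>/dev/null

# List every file with its size in KiB, largest first.
find store -type f -exec du -k {} + | sort -rn | head -5
```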

  • R610 SAS6IR with SSD RAID 1 - file system corruption

    Hi all!  I've been working on a recurring problem with some Dell R610 servers upgraded with the SAS6IR controller running a pair of Kingston 256 GB SSDs in a RAID 1 configuration.  I have 5 of these servers, all identical, and all have the same issue, which is why I'm leaning - at this point anyway - towards a settings issue rather than hardware: I've swapped parts around between the servers, and the SSDs tested fine in an R710; the issue arises in all the R610s but not in the R710 using the same disks.

    So, on to the real problem: file system corruption after the system is cleanly shut down by the operating system - not a power failure or the power button.

    I've tried Ubuntu Linux, RHEL 7 and Windows Server 2k8, and I've seen the issue in the Linux OSs but not Windows - even XenServer worked perfectly after a 4-day weekend, which leads me to think it's Linux-related.  All appears well in the main BIOS and the 6iR as far as the disks/RAID volumes go, and running diagnostics on the drives shows they are all healthy.  I can install and reboot until the cows come home and there is no problem.  But when I shut down the system completely (this is a test environment, not yet in production) using the OS shutdown option - which gracefully stops the system and turns off the power, but without disconnecting the power plug/UPS - and then power it back on (usually after being off overnight, but the same has happened after just an hour), the messages look fine and the boot sequence starts.  The RAID array and volume are found, and the operating system begins booting.  It's then that the system drops to a prompt instead of the GUI, or stops with a file system corruption error saying there are deleted inodes.  I'm about to download the Dell diagnostics DVD and see what it comes up with, but this issue has been frustrating me for almost 2 months now and I'm struggling to find the cause.

    If any additional information is required, please let me know.  Thanks in advance.

    Scott

    I discovered that it was indeed the RAID controllers in the R610.  The particular version of the PERC 6 was not compatible with the Ubuntu kernel and wasn't managing the SSDs correctly.  After replacing them with the PERC 6/i with battery backup, everything survived a weekend shutdown.  Thanks for all the suggestions and help.

  • VMware Workstation 9: Could not open /dev/vmmon: No such file or directory

    I'm having a plethora of problems installing VMware Workstation 9 (9.0.2 build-1031769) and getting it to work on Ubuntu 13.04 (kernel 3.8.0-26-generic).

    I went through exactly the same process on my laptop and it works perfectly, while on the desktop everything seems to break. Could someone give me some help/advice on how to get this working? I've tried full uninstall/reinstall of the software, kernel headers, kernel images, different kernels, and patches from here, on fresh installs of everything I can think of. Here are the issues I encounter, in order of importance (or so I think):

    (1) Whenever I try to power on my newly created virtual machine, I get the following sequence of errors:

    "Could not open/dev/vmmon: no such file or directory. " Please make sure that the "vmmon" kernel module is loaded. '--> 'Failed to initialize the device monitor. "->" cannot change the power state of VM: could not find a valid peer to connect to '


    (2) Every time I run the following command to configure the modules on a completely fresh setup (reinstalled kernel, VMware), I get the following output:

    sudo vmware-modconfig --console --install-all

    Stopping VMware services:
       VMware Authentication Daemon                               done
       VM communication interface socket family                   done
       Virtual machine communication interface                    done
       Virtual machine monitor                                    done
       Blocking file system                                       done

    Using 2.6.x kernel build system.
    make: Entering directory `/tmp/modconfig-G9S0TM/vmmon-only'
    /usr/bin/make -C /lib/modules/3.8.0-26-generic/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. \
      MODULEBUILDDIR= modules
    make[1]: Entering directory `/usr/src/linux-headers-3.8.0-26-generic'
    CC [M] /tmp/modconfig-G9S0TM/vmmon-only/linux/driver.o
    CC [M] /tmp/modconfig-G9S0TM/vmmon-only/linux/driverLog.o
    CC [M] /tmp/modconfig-G9S0TM/vmmon-only/linux/hostif.o
    CC [M] /tmp/modconfig-G9S0TM/vmmon-only/common/apic.o
    CC [M] /tmp/modconfig-G9S0TM/vmmon-only/common/comport.o
    CC [M] /tmp/modconfig-G9S0TM/vmmon-only/common/cpuid.o
    CC [M] /tmp/modconfig-G9S0TM/vmmon-only/common/hashFunc.o
    CC [M] /tmp/modconfig-G9S0TM/vmmon-only/common/memtrack.o
    CC [M] /tmp/modconfig-G9S0TM/vmmon-only/common/phystrack.o
    CC [M] /tmp/modconfig-G9S0TM/vmmon-only/common/task.o
    CC [M] /tmp/modconfig-G9S0TM/vmmon-only/common/vmx86.o
    CC [M] /tmp/modconfig-G9S0TM/vmmon-only/vmcore/moduleloop.o
    LD [M] /tmp/modconfig-G9S0TM/vmmon-only/vmmon.o
    Building modules, stage 2.
    MODPOST 1 modules
    WARNING: "mcount" [/tmp/modconfig-G9S0TM/vmmon-only/vmmon.ko] undefined!
    CC /tmp/modconfig-G9S0TM/vmmon-only/vmmon.mod.o
    LD [M] /tmp/modconfig-G9S0TM/vmmon-only/vmmon.ko
    make[1]: Leaving directory `/usr/src/linux-headers-3.8.0-26-generic'
    /usr/bin/make -C $PWD SRCROOT=$PWD/. \
      MODULEBUILDDIR= postbuild
    make[1]: Entering directory `/tmp/modconfig-G9S0TM/vmmon-only'
    make[1]: `postbuild' is up to date.
    make[1]: Leaving directory `/tmp/modconfig-G9S0TM/vmmon-only'
    cp -f vmmon.ko ./../vmmon.o
    make: Leaving directory `/tmp/modconfig-G9S0TM/vmmon-only'

    Using 2.6.x kernel build system.
    make: Entering directory `/tmp/modconfig-G9S0TM/vmnet-only'
    /usr/bin/make -C /lib/modules/3.8.0-26-generic/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. \
      MODULEBUILDDIR= modules
    make[1]: Entering directory `/usr/src/linux-headers-3.8.0-26-generic'
    CC [M] /tmp/modconfig-G9S0TM/vmnet-only/driver.o
    CC [M] /tmp/modconfig-G9S0TM/vmnet-only/hub.o
    CC [M] /tmp/modconfig-G9S0TM/vmnet-only/userif.o
    CC [M] /tmp/modconfig-G9S0TM/vmnet-only/netif.o
    CC [M] /tmp/modconfig-G9S0TM/vmnet-only/bridge.o
    CC [M] /tmp/modconfig-G9S0TM/vmnet-only/filter.o
    CC [M] /tmp/modconfig-G9S0TM/vmnet-only/procfs.o
    CC [M] /tmp/modconfig-G9S0TM/vmnet-only/smac_compat.o
    CC [M] /tmp/modconfig-G9S0TM/vmnet-only/smac.o
    CC [M] /tmp/modconfig-G9S0TM/vmnet-only/vnetEvent.o
    CC [M] /tmp/modconfig-G9S0TM/vmnet-only/vnetUserListener.o
    LD [M] /tmp/modconfig-G9S0TM/vmnet-only/vmnet.o
    Building modules, stage 2.
    MODPOST 1 modules
    WARNING: "mcount" [/tmp/modconfig-G9S0TM/vmnet-only/vmnet.ko] undefined!
    CC /tmp/modconfig-G9S0TM/vmnet-only/vmnet.mod.o
    LD [M] /tmp/modconfig-G9S0TM/vmnet-only/vmnet.ko
    make[1]: Leaving directory `/usr/src/linux-headers-3.8.0-26-generic'
    /usr/bin/make -C $PWD SRCROOT=$PWD/. \
      MODULEBUILDDIR= postbuild
    make[1]: Entering directory `/tmp/modconfig-G9S0TM/vmnet-only'
    make[1]: `postbuild' is up to date.
    make[1]: Leaving directory `/tmp/modconfig-G9S0TM/vmnet-only'
    cp -f vmnet.ko ./../vmnet.o
    make: Leaving directory `/tmp/modconfig-G9S0TM/vmnet-only'

    Using 2.6.x kernel build system.
    make: Entering directory `/tmp/modconfig-G9S0TM/vmblock-only'
    /usr/bin/make -C /lib/modules/3.8.0-26-generic/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. \
      MODULEBUILDDIR= modules
    make[1]: Entering directory `/usr/src/linux-headers-3.8.0-26-generic'
    CC [M] /tmp/modconfig-G9S0TM/vmblock-only/linux/block.o
    CC [M] /tmp/modconfig-G9S0TM/vmblock-only/linux/control.o
    CC [M] /tmp/modconfig-G9S0TM/vmblock-only/linux/dentry.o
    CC [M] /tmp/modconfig-G9S0TM/vmblock-only/linux/file.o
    CC [M] /tmp/modconfig-G9S0TM/vmblock-only/linux/filesystem.o
    CC [M] /tmp/modconfig-G9S0TM/vmblock-only/linux/inode.o
    CC [M] /tmp/modconfig-G9S0TM/vmblock-only/linux/stubs.o
    CC [M] /tmp/modconfig-G9S0TM/vmblock-only/linux/module.o
    /tmp/modconfig-G9S0TM/vmblock-only/linux/control.c: In function 'ExecuteBlockOp':
    /tmp/modconfig-G9S0TM/vmblock-only/linux/control.c:285: warning: assignment from incompatible pointer type
    /tmp/modconfig-G9S0TM/vmblock-only/linux/control.c:296: warning: passing argument 1 of 'putname' from incompatible pointer type
    include/linux/fs.h:2052: note: expected 'struct filename *' but argument is of type 'char *'
    CC [M] /tmp/modconfig-G9S0TM/vmblock-only/linux/super.o
    /tmp/modconfig-G9S0TM/vmblock-only/linux/inode.c:49: warning: initialization from incompatible pointer type
    /tmp/modconfig-G9S0TM/vmblock-only/linux/dentry.c:38: warning: initialization from incompatible pointer type
    /tmp/modconfig-G9S0TM/vmblock-only/linux/dentry.c: In function 'DentryOpRevalidate':
    /tmp/modconfig-G9S0TM/vmblock-only/linux/dentry.c:104: warning: passing argument 2 of 'actualDentry->d_op->d_revalidate' makes integer from pointer without a cast
    /tmp/modconfig-G9S0TM/vmblock-only/linux/dentry.c:104: note: expected 'unsigned int' but argument is of type 'struct nameidata *'
    LD [M] /tmp/modconfig-G9S0TM/vmblock-only/vmblock.o
    Building modules, stage 2.
    MODPOST 1 modules
    WARNING: "mcount" [/tmp/modconfig-G9S0TM/vmblock-only/vmblock.ko] undefined!
    WARNING: "putname" [/tmp/modconfig-G9S0TM/vmblock-only/vmblock.ko] undefined!
    CC /tmp/modconfig-G9S0TM/vmblock-only/vmblock.mod.o
    LD [M] /tmp/modconfig-G9S0TM/vmblock-only/vmblock.ko
    make[1]: Leaving directory `/usr/src/linux-headers-3.8.0-26-generic'
    /usr/bin/make -C $PWD SRCROOT=$PWD/. \
      MODULEBUILDDIR= postbuild
    make[1]: Entering directory `/tmp/modconfig-G9S0TM/vmblock-only'
    make[1]: `postbuild' is up to date.
    make[1]: Leaving directory `/tmp/modconfig-G9S0TM/vmblock-only'
    cp -f vmblock.ko ./../vmblock.o
    make: Leaving directory `/tmp/modconfig-G9S0TM/vmblock-only'

    Using 2.6.x kernel build system.
    make: Entering directory `/tmp/modconfig-G9S0TM/vmci-only'
    /usr/bin/make -C /lib/modules/3.8.0-26-generic/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. \
      MODULEBUILDDIR= modules
    make[1]: Entering directory `/usr/src/linux-headers-3.8.0-26-generic'
    CC [M] /tmp/modconfig-G9S0TM/vmci-only/linux/driver.o
    CC [M] /tmp/modconfig-G9S0TM/vmci-only/linux/vmciKernelIf.o
    CC [M] /tmp/modconfig-G9S0TM/vmci-only/common/vmciContext.o
    CC [M] /tmp/modconfig-G9S0TM/vmci-only/common/vmciDatagram.o
    CC [M] /tmp/modconfig-G9S0TM/vmci-only/common/vmciDoorbell.o
    CC [M] /tmp/modconfig-G9S0TM/vmci-only/common/vmciDriver.o
    CC [M] /tmp/modconfig-G9S0TM/vmci-only/common/vmciHashtable.o
    CC [M] /tmp/modconfig-G9S0TM/vmci-only/common/vmciEvent.o
    CC [M] /tmp/modconfig-G9S0TM/vmci-only/common/vmciQPair.o
    CC [M] /tmp/modconfig-G9S0TM/vmci-only/common/vmciQueuePair.o
    CC [M] /tmp/modconfig-G9S0TM/vmci-only/common/vmciResource.o
    CC [M] /tmp/modconfig-G9S0TM/vmci-only/common/vmciRoute.o
    CC [M] /tmp/modconfig-G9S0TM/vmci-only/driverLog.o
    LD [M] /tmp/modconfig-G9S0TM/vmci-only/vmci.o
    Building modules, stage 2.
    MODPOST 1 modules
    WARNING: "mcount" [/tmp/modconfig-G9S0TM/vmci-only/vmci.ko] undefined!
    CC /tmp/modconfig-G9S0TM/vmci-only/vmci.mod.o
    LD [M] /tmp/modconfig-G9S0TM/vmci-only/vmci.ko
    make[1]: Leaving directory `/usr/src/linux-headers-3.8.0-26-generic'
    /usr/bin/make -C $PWD SRCROOT=$PWD/. \
      MODULEBUILDDIR= postbuild
    make[1]: Entering directory `/tmp/modconfig-G9S0TM/vmci-only'
    make[1]: `postbuild' is up to date.
    make[1]: Leaving directory `/tmp/modconfig-G9S0TM/vmci-only'
    cp -f vmci.ko ./../vmci.o
    make: Leaving directory `/tmp/modconfig-G9S0TM/vmci-only'

    Using 2.6.x kernel build system.
    make: Entering directory `/tmp/modconfig-G9S0TM/vsock-only'
    /usr/bin/make -C /lib/modules/3.8.0-26-generic/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. \
      MODULEBUILDDIR= modules
    make[1]: Entering directory `/usr/src/linux-headers-3.8.0-26-generic'
    CC [M] /tmp/modconfig-G9S0TM/vsock-only/linux/af_vsock.o
    CC [M] /tmp/modconfig-G9S0TM/vsock-only/linux/notify.o
    CC [M] /tmp/modconfig-G9S0TM/vsock-only/linux/notifyQState.o
    CC [M] /tmp/modconfig-G9S0TM/vsock-only/linux/stats.o
    CC [M] /tmp/modconfig-G9S0TM/vsock-only/linux/util.o
    CC [M] /tmp/modconfig-G9S0TM/vsock-only/linux/vsockAddr.o
    CC [M] /tmp/modconfig-G9S0TM/vsock-only/driverLog.o
    LD [M] /tmp/modconfig-G9S0TM/vsock-only/vsock.o
    Building modules, stage 2.
    MODPOST 1 modules
    WARNING: "mcount" [/tmp/modconfig-G9S0TM/vsock-only/vsock.ko] undefined!
    CC /tmp/modconfig-G9S0TM/vsock-only/vsock.mod.o
    LD [M] /tmp/modconfig-G9S0TM/vsock-only/vsock.ko
    make[1]: Leaving directory `/usr/src/linux-headers-3.8.0-26-generic'
    /usr/bin/make -C $PWD SRCROOT=$PWD/. \
      MODULEBUILDDIR= postbuild
    make[1]: Entering directory `/tmp/modconfig-G9S0TM/vsock-only'
    make[1]: `postbuild' is up to date.
    make[1]: Leaving directory `/tmp/modconfig-G9S0TM/vsock-only'
    cp -f vsock.ko ./../vsock.o
    make: Leaving directory `/tmp/modconfig-G9S0TM/vsock-only'

    Starting VMware services:

    Virtual machine monitor failed

    Virtual machine communication interface failed

    Family of VM communication interface socket failed

    File system is blocking

    Virtual Ethernet failed

    Demon of authentication makes VMware

    Cannot start the services

    (3) I often get Ubuntu "this program has crashed" messages about "/usr/lib/vmware-installer/2.1.0/vmis-launcher" (this crash message always occurs when running the "vmware-modconfig" command above on a fresh installation).

    (4) At various times while building and running VMware Workstation on my system I get Ubuntu crash reports about "/usr/lib/vmware/bin/appLoader" and even "/bin/su" (especially when trying to install - su NEVER crashes anywhere else).

    Thoughts / ideas on what on earth is going on, what could have caused it all to go so downhill, and how to go about fixing it?

    Thank you

    Solved.

    The problem was that for another program I had switched to gcc-4.4 instead of a later version. Restoring the defaults (via update-alternatives --config) for gcc, g++ and cpp-bin fixed it.

    Not a very graceful failure for an invalid compiler version, I would say; maybe some sort of basic check could be added to ensure that the correct versions are present?

  • ESXi 5.1 SNMP configuration problems - .trp files accumulate after SNMP is disabled

    Hello

    We have had 2 problems with our new ESXi 5.1 servers:

    1. vMotions fail at 13%

    2. Some of the virtual machines are not accessible through their consoles. (Unable to contact MKS)

    To resolve these issues, I came across the following KB:

    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2040707

    I did the necessary steps and even created a new host profile in which the SNMP agent is disabled. However, after successful remediation with the new profile, .trp files continue to accumulate in /var/spool/snmp (over 3000 files in 2 days), and I have no idea what creates these trap files and why.

    Can anyone shed light on this issue?

    This is fixed in the latest release, vSphere 5.1 Update 1.

    Release notes: https://www.vmware.com/support/vsphere5/doc/vsphere-esxi-51u1-release-notes.html

    ESXi 5.x host appears disconnected in vCenter Server and logs a "ramdisk (root) is full" message in the vpxa.log file
    If Simple Network Management Protocol (SNMP) is unable to handle the number of SNMP trap (.trp) files in the /var/spool/snmp folder on ESXi, the host may appear as disconnected in vCenter Server. You may not be able to perform any tasks on the host.
    The vpxa.log contains several entries similar to the following:
    WARNING: VisorFSObj: 1954: Cannot create file
    /var/run/vmware/f4a0dbedb2e0fd30b80f90123fbe40f8.lck for process vpxa because the inode table of the ramdisk (root) is full.
    WARNING: VisorFSObj: 1954: Cannot create file
    /var/run/vmware/watchdog-vpxa.PID for process sh because the inode table of the ramdisk (root) is full.

    This issue is fixed in this version.
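Until the update is applied, a common stop-gap (my assumption, not something the quoted release note prescribes) is to count and, if necessary, clear the accumulated trap files by hand. A minimal sketch; the /var/spool/snmp path comes from the thread, and the delete line is deliberately commented out:

```shell
#!/bin/sh
# Count the accumulated SNMP trap (.trp) files; pass a directory
# argument to override the default path mentioned in the thread.
dir=${1:-/var/spool/snmp}
find "$dir" -name '*.trp' | wc -l      # how many traps have piled up
# find "$dir" -name '*.trp' -delete    # uncomment to purge them
```

Watching this count between runs shows whether the agent is still writing traps after it was supposedly disabled.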

  • Free INODES and % free RAMDISK

    Hi all

    I need to create a PowerCLI script to gather some information from 30 x ESXi 5.0 hosts on a daily basis. Here's the equivalent information in an SSH session, with the required information in red - but I want to do this without SSHing to each host and collating it manually;

    ~ # stat -f /

    File: "/"

    ID: 1 Namelen: 127 Type: visorfs

    Block size: 4096

    Blocks: Total: 449852 Free: 330092 Available: 330092

    Inodes: Total: 8192 Free: 5562

    ~ # esxcli system visorfs ramdisk list

    Ramdisk Name  System  Reserved    Maximum      Used       Peak Used  Free  Reserved Free

    ------------  ------  ----------  -----------  ---------  ---------  ----  -------------

    root          true    32768 KiB   32768 KiB    1476 KiB   1596 KiB   95%   95%

    etc           true    28672 KiB   28672 KiB    264 KiB    316 KiB    99%   99%

    tmp           false   2048 KiB    196608 KiB   340 KiB    0 KiB      100%  100%

    hostdstats    false   0 KiB       1078272 KiB  14212 KiB  14212 KiB  98%   0%

    I am aware that the Get-EsxCli cmdlet exists, but I have failed to get it working properly.

    NAME
    Get-EsxCli

    SYNOPSIS
    Exposes the ESXCLI functionality.

    Is this possible, and if so, please help with pointers or, ideally, a basic script to collect the information - much appreciated!

    Thanks,

    Jon

    To get this information for all of your hosts, and to include the host name as well:

    foreach ($VMHost in (Get-VMHost)) {
      $esxcli = Get-EsxCli -vmhost $VMHost
      $esxcli.system.visorfs.get() |
      Add-Member -MemberType NoteProperty -Name VMHost -Value $VMHost.Name -PassThru
    }
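For the first half of the wanted output (the stat -f block and inode totals), the same figures can also be read locally on any Linux box with GNU coreutils stat; a rough shell equivalent of what the question grabs over SSH (GNU stat syntax shown here, which differs from ESXi's busybox stat):

```shell
#!/bin/sh
# Print filesystem type plus block and inode totals for /, mirroring
# the "stat -f /" figures quoted in the question (GNU stat format specs:
# %T type, %b/%f total/free blocks, %c/%d total/free inodes).
stat -f -c 'Type: %T  Blocks total/free: %b/%f  Inodes total/free: %c/%d' /
```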
    
  • Deleting control files - 11gR2 on OEL5 - strange behavior

    Hi all

    I installed OEL5 on VMware WS9 and have 11gR2 Enterprise Edition running.

    I tried to simulate the loss and eventual recovery of a control file:

    1. Queried the control_files parameter

    NAME                                 TYPE        VALUE

    ------------------------------------ ----------- ------------------------------

    control_files                        string      /u01/app/oracle/oradata/orcl/control01.ctl, /u01/app/oracle/flash_recovery_area/orcl/control02.ctl, /u01/app/oracle/oradata/orcl/control03.ctl, /u01/app/oracle/oradata/orcl/control04.ctl

    2. Removed the control file '/u01/app/oracle/oradata/orcl/control04.ctl' using an OS utility

    - Ideally, as far as I KNOW from my reading, at this point the instance should have crashed.

    Instead, it carried on. I could even create a table and switch the logfile after removing the control file.

    The alert log had no entries related to the loss of the control file up to that point.

    3. Nevertheless, moving on, I tried shutdown immediate. This gave the error:

    ORA-00210: could not open the specified control file

    ORA-00202: control file: ' / u01/app/oracle/oradata/orcl/control04.ctl'

    ORA-27041: could not open the file

    Linux error: 2: no such file or directory

    Additional information: 3

    and also filled the alert log.

    4. The shutdown abort and subsequent startup nomount, followed by changing the CONTROL_FILES parameter, fixed it.

    Shouldn't the instance crash if one of the multiplexed copies of the controlfile is lost?

    Has something changed in 11gR2 - is this a new feature?

    Regards

    Kuber

    With any *NIX system, when you issue 'rm filename' and the file is actually held OPEN by 1 or more processes,

    all that is done is that the inode entry is deleted from the directory file.

    Nothing actually happens to the file as far as rm is concerned.

    Only after the last process closes the file and releases the file handle is the disk space actually returned to the operating system.
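The behaviour described above is easy to demonstrate from a shell; a small sketch using a scratch file (nothing here is specific to Oracle):

```shell
#!/bin/sh
# rm only removes the directory entry; a process holding the file open
# can still read it, and the blocks are freed only when the fd closes.
f=$(mktemp)
echo "still here" > "$f"
exec 3< "$f"                      # keep an open handle on fd 3
rm "$f"                           # name gone; inode lives on
[ -e "$f" ] || echo "name is gone"
cat <&3                           # data still readable via the handle
exec 3<&-                         # only now is the space released
```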

    Problem exists between keyboard and Chair

  • Commit complete even though the current online redo log file had been removed

    Even though I had deleted the current online redo log file at the Linux OS level (Oracle Linux), when I hit 'commit' it said 'commit complete'.

    Is it just because of this principle?: *"Only when all redo records associated with a given transaction are safely on disk in the online logs is the user process notified that the transaction has been committed."*

    I think it can lead to data loss in some cases... I am using Oracle 11gR2 on OEL (x64)...

    Can someone explain this to me? I'm stuck in this situation...

    PS: I have not multiplexed the current group's files...

    "...and these transactions are committed. It in fact means that these transactions are not committed"

    WHAT!
    In Oracle, if the transaction is committed - it is committed.

    I can't understand how the database allows users to commit transactions that are not actually written to disk... is this the concern of the operating system or of the Oracle database?

    Your understanding is wrong.
    The log is actually written to disk. That is the point. Even though the file's name has been removed from the directory, the file still physically exists and can still be written to, and everything will be physically stored in the sectors of the HDD (or SSD).
    However, because the file is deleted, meaning its name is no longer in the directory, the next time Oracle tries to open it (by name) it will fail, since it will not find it in the dir.
    This is why it is very important in this case not to shut Oracle down, but to take a backup with RMAN, or better, to restore the deleted files at the OS level using their inodes and the old names.
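On Linux, the "restore via the still-open handle" approach the answer alludes to can be done through /proc, since every open descriptor of a live process is exposed there. A hedged sketch; the file name and fd number are illustrative, not from the thread:

```shell
#!/bin/sh
# Recover a deleted-but-still-open file by copying it back out of /proc.
f=$(mktemp)
echo "redo data" > "$f"
exec 3< "$f"                       # a process (this shell) holds it open
rm "$f"                            # the name is gone; the inode is not
cp "/proc/$$/fd/3" "$f"            # Linux-only: copy the data back out
exec 3<&-
cat "$f"
```

For a real database, you would locate the holding process (e.g. the log writer) and its fd under /proc/&lt;pid&gt;/fd before the instance is shut down, since shutting down releases the inode for good.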

  • Alert log entries for a missing data file

    I'm trying a simple recovery scenario: remove a data file of an ordinary tablespace while the database is running, with the data file already backed up.

    At first, the checkpoint completes normally (I guess Oracle still holds the file's inode handle). When I select from dba_data_files, the error occurs:
    SQL> alter system checkpoint;
    
    System altered.
    
    SQL> select count(*) from dba_data_files;
    select count(*) from dba_data_files
                         *
    ERROR at line 1:
    ORA-01116: error in opening database file 10
    ORA-01110: data file 10: '/u01/oradata/data01.dbf'
    ORA-27041: unable to open file
    Linux Error: 2: No such file or directory
    Additional information: 3
    To my surprise, nothing was recorded in the alert log, no trace files were generated in udump, and Oracle refused to restore the data file while it was running.

    The message only appeared in the alert log when trying to shut down the database. After a shutdown abort (Oracle refused shutdown immediate) and restarting the instance, I was able to restore and recover the data file.

    Is this the expected behavior? If so, how would you detect missing files while the db is running?


    Oracle: 9.2.0.6, OS: RHEL3

    1. It was not detected by the checkpoint caused by a logfile switch because the file is not actually missing. As others (indeed, you!) have said, on Unix/Linux you can remove a file at any time, but if the file is in use, its inodes remain allocated and the process using it continues to see it just fine, because, at the inode level, it absolutely IS still there! By the way, it is pointless to do a logfile switch, because as far as data files are concerned that does nothing beyond the alter system checkpoint command you already tried.

    I used to do demos of how to recover from the complete loss of a controlfile in Oracle University classes: I would issue a burst of rm *.ctl commands and then dramatically issue the checkpoint command... and nothing happened. Very demoralizing the first time I did it. Issue a startup force, however, and at that moment the inodes are released, the files are actually deleted, and the startup falls over in nomount state. Bouncing the instance was the only way I could get the database to protest against the removal of a data file, too.

    2. No, if a file went missing in a way Oracle could detect, it wouldn't matter whether the file was empty, because what it cares about is the presence/absence of the datafile headers. They need to be updated even if nothing else is.

    3. It would be in the alert log if SMON had detected the lost file at startup (one of its jobs is, explicitly, to verify the existence of all the files mentioned in the control file). CKPT would not raise an alert because, as far as it is concerned, everything is fine; the inodes are still held, after all. Your attempts to create tables and so forth inside the lost file don't generate visible errors because (I think) at that point it is your server process trying to do things to the file, not CKPT or some other pre-existing, constantly running background process.

    4. It's specific to Unix anyway, of which Linux is a variant.

  • Slow navigation to a file in Sierra

    I just installed Sierra, and when I try to import a file into FileMaker Pro 14 it takes a painfully long time for the file-chooser window to appear. Same thing in Mail when I try to choose an attachment.

    Does anyone else have this problem?

    If you have 'just' installed Sierra, as you say, there is a lot of activity going on at the moment: for example, Spotlight is re-indexing the system, Time Machine is backing it up, Photos is analyzing images for facial recognition, etc.

    What does Applications > Utilities > Activity Monitor show using most of the CPU time?
