1,319,313,753,000
I am trying to automate adding Homebrew to my path in a shell script, but these two lines do not evaluate inside my shell script: #!/bin/sh eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)" The second line of code runs the program [home]brew with the argument shellenv. It echoes several environment variables to set, which are then evaluated. The first line just adds the second line to my ~/.profile so brew's PATH is set up in every shell. I noticed that this statement does not work in my shell script, but it does when I type it into my terminal. What might be causing this? I tried running the commands I was instructed to run after Homebrew installed in a shell script and got the same behavior. Similar question: problem with homebrew installation path on linux
Running the script creates a local environment, inherited from the invoking environment, in which the script is executing and which ultimately is destroyed when the script terminates. I'm first assuming that by "does not evaluate" and "not working", you are observing that the brew shellenv command does not appear to be outputting anything when you run the script and that you, therefore, believe that the environment in the script is not initialised correctly for use with Homebrew. (NOTE: This interpretation is probably less correct after the question was updated with further clarifications. See after the divider below instead.) The command brew shellenv only outputs the shell commands that set the appropriate environment variables for Homebrew if the necessary variables don't already have the correct values. The documentation you get through brew help shellenv explains exactly what the command does. The following is from a macOS system, but it'll likely read similarly on Linux (my emphasis added): Usage: brew shellenv Print export statements. When run in a shell, this installation of Homebrew will be added to your PATH, MANPATH, and INFOPATH. The variables HOMEBREW_PREFIX, HOMEBREW_CELLAR and HOMEBREW_REPOSITORY are also exported to avoid querying them multiple times. To help guarantee idempotence, this command produces no output when Homebrew's bin and sbin directories are first and second respectively in your PATH. Consider adding evaluation of this command's output to your dotfiles (e.g. ~/.profile, ~/.bash_profile, or ~/.zprofile) with: eval "$(brew shellenv)" What may be happening is that your script is executing in an environment where the PATH variable already contains the bin and sbin directories, as described by the text above. If you are instead trying to get your script to change the invoking shell's environment, you should be running your script with . 
or source instead, which would cause the commands in the file to be executed in the current environment (which also means that the #!-line would be ignored). With regards to this, see also How to alter PATH within a shell script? What is the difference between sourcing ('.' or 'source') and executing a file in bash?
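The scoping behaviour described above can be demonstrated without Homebrew at all. A minimal sketch, using a made-up variable name and /tmp path purely for illustration:

```shell
# A child shell gets a copy of the environment; its changes vanish
# when it exits. Sourcing runs the same commands in the current shell.
export MARKER=parent
sh -c 'MARKER=child'      # runs in a child process
echo "$MARKER"            # prints: parent

echo 'MARKER=sourced' > /tmp/env-demo.sh
. /tmp/env-demo.sh        # no child process is created
echo "$MARKER"            # prints: sourced
```

The same applies to PATH: a script run normally can set it only for itself, which is why the eval line must either be sourced or placed in a dotfile that login shells read.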
Command output evaluation not working in Bash script [duplicate]
1,319,313,753,000
I'd like to execute a statement to start a server. For that I have an environment variable to determine which server is to be started. I was given this command as a starting point: eval "exec gunicorn --chdir /this/dir package.sub:call_main() -b 0.0.0.0:80" As I have a few kinds of servers to start up, I would like to parameterise the script. And after searching around, I found out the quotations are redundant. So what I have now is: APP=main eval exec gunicorn --chdir /this/dir package.sub:call_${APP}() -b 0.0.0.0:80 This, however, produces a syntax error near unexpected token '('. Ideally I would even like to have a default argument like ${APP:-main}, but I guess that is possible once the syntax error issue is resolved. What is wrong with the statement above? Additionally, is eval or even exec needed here?
In your second piece of code, you have removed the double quotes around the argument to eval. Don't do that. Removing them would make () special to the shell (it starts a sub-shell). Instead: app=main eval "exec gunicorn --chdir /this/dir package.sub:call_$app'()' -b 0.0.0.0:80" The parentheses still have to be quoted here as eval re-evaluates the string. The $app variable expansion would be done before eval is called. Or, app=main eval "exec gunicorn --chdir /this/dir 'package.sub:call_$app()' -b 0.0.0.0:80" which may look nicer. Note that ${APP} and $APP are identical in every way except when immediately followed by a character that is valid in a variable name (as in "${APP}x"). Here, the {...} is not needed. Also, use lower-case variable names to avoid accidental clashes with existing environment variables. I don't think either of eval or exec is needed here. The string does not seem to need re-evaluation with eval, and exec would replace the current shell process with gunicorn (I don't know whether this is what you want or not). It may be enough to use app=main gunicorn --chdir /this/dir "package.sub:call_$app()" -b 0.0.0.0:80 Note the double-quoting. Related: When is double-quoting necessary? Security implications of forgetting to quote a variable in bash/POSIX shells Are there naming conventions for variables in shell scripts?
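The quoting point can be seen in isolation without gunicorn installed; printf stands in here for the real command line:

```shell
# Inside double quotes the parentheses are ordinary characters,
# so no eval or exec gymnastics are required to build the argument:
app=main
printf '%s\n' "package.sub:call_$app()"       # package.sub:call_main()

# The default-value form the asker wanted also works directly:
app=''
printf '%s\n' "package.sub:call_${app:-main}()"
```

Both lines print package.sub:call_main(), which is exactly the single word gunicorn would receive.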
Eval and exec with variable substitution
1,319,313,753,000
Tracking down strange behavior a bash script resulted in the following MWE: set -o errexit set -o nounset set -x my_eval() { eval "$1" } my_eval "declare -A ASSOC" ASSOC[foo]=bar echo success fails with: line 9: foo: unbound variable. Yet it works if eval is used in place of my_eval (and, obviously, if the declare is done directly, without any indirection). Why does evalling a declare statement in a function not work the same as doing it outside of a function? I'm using GNU bash, version 4.3.46(1)-release (x86_64-pc-linux-gnu), from the popular Ubuntu distribution of Linux.
A glance at the man pages tells us: The -g option forces variables to be created or modified at the global scope, even when declare is executed in a shell function. Thus, if your script would say: my_eval "declare -gA ASSOC" it/you would be happier. The point is that the "declare" statement sees its scope at where it is executed/evaluated, and not at where it is written.
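A minimal sketch of the fix, run through bash explicitly since associative arrays are a bashism:

```shell
# With -g the declare survives the function's scope, so the later
# assignment no longer refers to an undeclared variable:
bash -c '
my_eval() { eval "$1"; }
my_eval "declare -gA ASSOC"
ASSOC[foo]=bar
echo "${ASSOC[foo]}"
'                         # prints: bar
```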
why doesn't eval declare in a function work in bash?
1,319,313,753,000
In a bash script: jenkins_folder=`cut -d "|" -f1 -s input.csv` jenkins_url='https://url.com:8181/jenkins/view/' echo "jenkins_folder : ${jenkins_folder}" for job in `java -jar jenkins-cli.jar -s ${jenkins_url}${jenkins_folder} list-jobs ${jenkins_folder} ` do echo "Job name:: ${job} ****" java -jar jenkins-cli.jar -s ${jenkins_url}${jenkins_folder} get-job ${job} > job.xml done is giving the following output jenkins_folder : Platform-X.X.X-SPO-MyPD-Integration-Dummy ****ame:: NH-AccountManagementAudit-Consumer-MyPD-Integration-DUMMY-Reporting '; perhaps you meant 'NH-AccountManagementAudit-Consumer-MyPD-Integration-DUMMY-Reporting'? And when I substitute the values of all variables and run the following command, it works fine java -jar jenkins-cli.jar -s https://url.com:8181/jenkins/view/Platform-X.X.X-SPO-MyPD-Integration-Dummy get-job NH-AccountManagementAudit-Consumer-MyPD-Integration-DUMMY-Reporting > job.xml I have wasted too much time on this. I even tried using the eval function but no luck. Please help. Thanks to @Kusalananda: when I am trying to echo the job name, it's printing the output in a weird fashion... I feel this is the root cause, but I am not sure why that is happening. If I try to shorten the length of the variable job (using a substring), then it prints fine. Hence, if the value of job is long, it is creating the problem
The first issue is obvious: your url is https://url.com:8181/jenkins/view, so since it doesn't end with a slash, the value of ${jenkins_url}${jenkins_folder} is https://url.com:8181/jenkins/viewPlatform-X.X.X-SPO-MyPD-Integration-Dummy. So fix the url: jenkins_url='https://url.com:8181/jenkins/view/' Or, add the slash when you combine variables: ${jenkins_url}/${jenkins_folder} Next, remember to always quote your variables: java -jar jenkins-cli.jar -s "${jenkins_url}${jenkins_folder}" get-job "$job" > job.xml Your other issue is that it sounds like you have a file or command output with Windows line endings. That might be input.csv or anything else, but you will need to remove them. First run dos2unix input.csv Or, if you don't have dos2unix, run: sed -i 's/\r//' input.csv Then try your script again. If it still doesn't work, update your question with more details. If the \r is from your jenkins-cli command, try this: jenkins_folder=`cut -d "|" -f1 -s input.csv` jenkins_url='https://url.com:8181/jenkins/view/' echo "jenkins_folder : ${jenkins_folder}" java -jar jenkins-cli.jar \ -s "${jenkins_url}/${jenkins_folder}" \ list-jobs "$jenkins_folder" | sed 's/\r$//' | while read job do echo "Job name:: ${job} ****" java -jar jenkins-cli.jar -s "${jenkins_url}${jenkins_folder}" get-job "$job" > job.xml done
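The carriage-return problem can be reproduced and fixed in miniature; the job names below are made up:

```shell
# Output with Windows line endings leaves a \r at the end of each
# value read into $job; stripping it with tr (or sed) restores the
# names and the mangled echo output:
printf 'job-one\r\njob-two\r\n' |
tr -d '\r' |
while IFS= read -r job; do
  printf 'Job name:: %s ****\n' "$job"
done
```

Without the tr filter, the \r left in $job makes the terminal overwrite the start of each echoed line, producing garbage like ****ame:: above.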
Shell script: Using variables makes command fails ( substituting values of variables manually ; command works fine )
1,319,313,753,000
My code docker-machine create --driver virtualbox dev VBoxManage list vms "minikube" {9c326ed5-faf4-42fe-acda-bf3a283f1a74} "kalinew" {de6de631-0d51-4638-b967-66db463cbf05} "dev" {84a116bf-02b9-48e3-809a-f5232518c8ee} Then eval "$(docker-machine env dev)" My goal was to check with echo echo $dev Got empty line. Why?
Because docker-machine env dev doesn't set the environment variable dev, it sets environment variables for the host called dev. Run docker-machine env dev without the eval to see what environment variables get set. Also compare with the output of docker-machine env (without dev) if your docker-machine configuration is wrong somewhere, but still has a reasonable default.
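What the eval actually does can be simulated without docker-machine; the export statement below is an illustrative stand-in for its real output:

```shell
# docker-machine env dev prints export statements roughly like this
# (host address made up for the example):
out='export DOCKER_HOST="tcp://192.168.99.101:2376"'
eval "$out"               # run those statements in the current shell
echo "$DOCKER_HOST"       # the per-host variables are now set...
echo "${dev:-unset}"      # ...but nothing ever sets a variable "dev"
```

So the check is not echo $dev but echoing the DOCKER_* variables, or simply re-running docker-machine env dev and reading what it prints.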
How to check "eval" command?
1,319,313,753,000
From https://unix.stackexchange.com/a/7739/674, why does parameter expansion happen before brace expansion in the following example? eval rm foo.{$ext0..$extN}
It's doing a no-op brace expansion first (expanding to what you started with), then parameter expansion, then brace expansion within "eval": # No-op brace expansion example: echo foo.{not..understood} # See: foo.{not..understood} ext0=10 extN=20 # Parameter expansion echo foo.{$ext0..$extN} # See: foo.{10..20} # Final brace expansion eval echo foo.{$ext0..$extN} # See: foo.10 foo.11 foo.12 foo.13 foo.14 foo.15 foo.16 foo.17 foo.18 foo.19 foo.20 # Now if you want to have fun with it (each eval unwraps one level of escapes) eval eval eval eval echo foo.{\\\\\\\$ext0..\\\\\\\$extN} # See: foo.10 foo.11 foo.12 foo.13 foo.14 foo.15 foo.16 foo.17 foo.18 foo.19 foo.20
Why does parameter expansion happen before brace expansion in this example?
1,357,727,051,000
I have two folders on the same partition (EXT2) If I mv folder1/file folder2 and some interruption occur (e.g. power failure) could the file system ever end up being inconsistent? Isn't the mv operation atomic? Update: So far on IRC I got the following perspectives: it is atomic so inconsistencies cannot happen first you copy the dir entry in the new dir and then erase entry on previous dir, so you may have the inconsistency of having a file referenced twice, but the ref count is 1 it first erases the pointer and then copy the pointer so the inconsistency is that the file has reference 0 Can someone clarify?
First, let's dispel some myths. it is atomic so inconsistencies cannot happen Moving a file inside the same filesystem (i.e. the rename() system call) is atomic with respect to the software environment. Atomicity means that any process that looks for the file will either see it at its old location or at its new location; no process will be able to observe that the file has a different link count, or that the file is present in the source directory after being present in the destination directory, or that the file is absent from the target directory after being absent in the source directory. However, if the system crashes due to a bug, a disk error or a power loss, there is no guarantee that the filesystem is left in a consistent state, let alone that the move isn't left half-done. Linux does not in general offer a guarantee of atomicity with respect to hardware events. first you copy the dir entry in the new dir and then erase entry on previous dir, so you may have the inconsistency of having a file referenced twice, but the ref count is 1 This refers to a specific implementation technique. There are others. It so happens that ext2 on Linux (as of kernel 3.16) uses this particular technique. However, this does not imply that the disk content goes through the sequence [old location] → [both locations] → [new location], because the two operations (add new entry, remove old entry) are not atomic at the hardware level either: it is possible for one of them to be interrupted, leaving the filesystem in an inconsistent state. (Hopefully fsck will repair it.) Furthermore the block layer can reorder writes, so the first half could be committed to disk just before the crash and the second half would then not have been performed. The reference count will never be observed to be different from 1 as long as the system doesn't crash (see above) but that guarantee does not extend to a system crash.
it first erases the pointer and then copy the pointer so the inconsistency is that the file has reference 0 Once again, this refers to a particular implementation technique. A dangling file cannot be observed if the system doesn't crash, but it is a possible consequence of a system crash, at least in some configurations. According to a blog post by Alexander Larsson, ext2 gives no guarantee of consistency on a system crash, but ext3 does in the data=ordered mode. (Note that this blog post is not about rename itself, but about the combination of writing to a file and calling rename on that file.) Theodore Ts'o, the principal author of the ext2, ext3 and ext4 filesystems, wrote a blog post on the same issue. This blog post discusses atomicity (with respect to the software environment only) and durability (which is atomicity with respect to crashes plus a guarantee of commitment, i.e. knowing that the operation has been performed). Unfortunately I can't find information about atomicity with respect to crashes alone. However, the durability guarantees given for ext4 require that rename is atomic. The kernel documentation for ext4 states that ext4 with the auto_da_alloc option (which is the default in modern kernels) provides a durability guarantee for a write followed by a rename, which implies that rename is atomic with respect to hardware crashes. For Btrfs, a rename that overwrites an existing file is guaranteed to be atomic with respect to crashes, but a rename that does not overwrite a file can result in neither file existing or in both files existing. In summary, the answer to your question is that not only is moving a file not atomic with respect to crashes on ext2, but it isn't even guaranteed to leave the file in a consistent state (though failures that fsck cannot repair are rare) — pretty much nothing is, which is why better filesystems have been invented. Ext3, ext4 and btrfs do provide limited guarantees.
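The write-then-rename pattern the cited blog posts discuss can be sketched like this (paths are illustrative; sync(1) from recent coreutils accepts a file argument, older versions fall back to a full sync):

```shell
# Write the complete new contents to a temporary file on the SAME
# filesystem, flush it, then rename() it over the destination so
# readers only ever see the old file or the new file, never a
# half-written one.
tmp=$(mktemp /tmp/demo.XXXXXX)
printf 'new contents\n' > "$tmp"
sync "$tmp" 2>/dev/null || sync   # flush data before the rename
mv "$tmp" /tmp/demo-target        # one rename(2): atomic on one fs
cat /tmp/demo-target              # prints: new contents
```

This is exactly the write+rename combination for which ext4's auto_da_alloc and ext3's data=ordered give their guarantees.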
Can the filesystem become inconsistent if interrupted when moving a file?
1,357,727,051,000
Why don't ext2/3/4 need to be defragmented? Is there no fragmentation at all?
Modern filesystems, particularly those designed to be efficient in multi-user and/or multi-tasking use cases, do a fairly good job of not fragmenting data until filesystems become near to full (there is no exact figure for where the "near to full" mark is as it depends on how large the filesystem is, the distribution of file sizes and what your access patterns are - figures between 85% and 95% are commonly quoted) or the pattern of file creations and writes is unusual or the filesystem is very old so has seen a lot of "action". This includes ext2/3/4, reiser, btrfs, NTFS, ZFS, and others. There is currently no kernel-/filesystem- level way to defragment ext3 or 4 at the moment (see http://en.wikipedia.org/wiki/Ext3#Defragmentation for a little more info) though ext4 is planned to soon gain online defragmentation. There are user-land tools (such as http://vleu.net/shake/ and others listed in that wikipedia article) that try to defragment individual files or sets of files by copying/rewriting them - if there is a large enough block of free space this generally results in the file being given a contiguous block. This in no way guarantees files are near each other though, so if you run shake over a pair of large files you might find it results in the two files being defragmented themselves but not anywhere near each other on the disk. In a multi-user filesystem the locality of files to each other isn't often important (it is certainly less important than fragmentation of the files themselves) as the drive heads are flipping all over the place to serve different users' needs anyway, and this drowns out the latency bonus that locality of reference between otherwise unfragmented files would give, but on a mostly single-user system it can give measurable benefits. If you have a filesystem that has become badly fragmented over time and currently has a fair amount of free space, then running something like shake over all its files could have the effect you are looking for.
Another method would be to copy all the data to a new filesystem, remove the original, and then copy it back on again. This helps in much the same way shake does but may be quicker for larger amounts of data. For small amounts of fragmentation, just don't worry about it. I know people who spend more time sat watching defragmentation progress bars than they'll ever save (due to more efficient disk access) in several lifetimes of normal operation!
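The copy/rewrite approach mentioned above can be tried on a scratch file; filefrag (from e2fsprogs) reports the extent count where the underlying filesystem supports it, and the exact numbers will vary:

```shell
f=$(mktemp)
dd if=/dev/zero of="$f" bs=4k count=4 status=none
filefrag "$f" 2>/dev/null || true      # e.g. "... 1 extent found"
cp "$f" "$f.new" && mv "$f.new" "$f"   # rewrite into free space, which
                                       # usually re-packs it contiguously
wc -c < "$f"                           # contents/size unchanged: 16384
rm -f "$f"
```

Running filefrag before and after on a genuinely fragmented file shows whether the rewrite actually reduced the extent count.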
Defragging an ext partition?
1,357,727,051,000
I was reading the Kernel Documentation where it says There are various limits imposed by the on-disk layout of ext2. Other limits are imposed by the current implementation of the kernel code. Many of the limits are determined at the time the filesystem is first created, and depend upon the block size chosen. The ratio of inodes to data blocks is fixed at filesystem creation time, so the only way to increase the number of inodes is to increase the size of the filesystem. For a 4Kb block size, the maximum file size is 2048GB. I have also read that during data block allocation it uses direct, double or triple indirection to data blocks. Is that the main factor?
The 2TiB file size is limited by the i_blocks value in the inode, which indicates the number of 512-byte sectors rather than the actual number of ext2 blocks allocated. Referenced from: http://www.nongnu.org/ext2-doc/ext2.html
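The 2048GB figure falls straight out of that: a 32-bit i_blocks counter of 512-byte sectors caps the file size regardless of the filesystem block size.

```shell
# 2^32 sectors x 512 bytes each = 2 TiB:
echo $(( (1 << 32) * 512 ))          # 2199023255552 bytes
echo $(( ((1 << 32) * 512) >> 30 ))  # 2048 (GiB)
```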
What determines the maximum file size in ext2 file system
1,357,727,051,000
What is the difference between disabling the journal on an ext4 file system using: tune2fs -O ^has_journal /dev/sda1 and using data=writeback when mounting? I thought ext4 - journal = ext2, meaning that when we remove the journal from an ext4 file system, it is automatically converted to ext2 (and thus we cannot benefit from other ext4 features)
The two are in no way equivalent. Disabling the journal does exactly that: turns journaling off. Setting the journal mode to writeback, on the other hand, turns off certain guarantees about file data while assuring metadata consistency through journaling. The description of the data=writeback option in mount(8) says: Data ordering is not preserved - data may be written into the main filesystem after its metadata has been committed to the journal. This is rumoured to be the highest-throughput option. It guarantees internal filesystem integrity, however it can allow old data to appear in files after a crash and journal recovery. Setting data=writeback may make sense in some circumstances when throughput is more important than file contents. Journaling only the metadata is a compromise that many filesystems make, but don't disable the journal entirely unless you have a very good reason.
disabling journal vs data=writeback in ext4 file system
1,357,727,051,000
summary Suppose one is setting up an external drive to be a "write-once archive": one intends to reformat it, copy some files that will (hopefully) never be updated, then set it aside until I need to read something (which could be a long while or never) from the archive from another linux box. I also want to be able to get as much filespace as possible onto the archive; i.e., I want the filesystem to consume as little freespace as possible for its own purposes. specific question 1: which filesystem would be better for this usecase: ext2, or ext4 without journaling? Since I've never done the latter before (I usually do this sort of thing with GParted), just to be sure: specific question 2: is "the way" to install journal-less ext4 mke2fs -t ext4 -O ^has_journal /dev/whatever ? general question 3: is there a better filesystem for this usecase? or Something Completely Different? details I've got a buncha files from old projects on dead boxes (which will therefore never be updated) saved on various external drives. Collectively size(files) ~= 250 GB. That's too big for DVDs (i.e., would require too many--unless I'm missing something), and I don't have a tape drive. Hence I'm setting up an old USB2 HFS external drive to be their archive. I'd prefer to use a "real Linux" filesystem, but would also prefer a filesystem that consumes minimum space on the archive drive (since it's just about barely big enough to hold what I want to put on it) and will be readable from whatever (presumably Linux) box I'll be using in future. I had planned to do the following sequence with GParted: [delete old partitions, create single new partition, create ext2 filesystem, relabel].
However, I read here that recent Linux kernels support a journal-less mode of ext4 which provides benefits not found with ext2 and noted the following text in man mkfs.ext4 "mke2fs -t ext3 -O ^has_journal /dev/hdXX" will create a filesystem that does not have a journal So I'd like to know Which filesystem would be better for this usecase: ext2, or ext4 without journaling? Presuming I go ext4-minus-journal, is the commandline to install it mke2fs -t ext4 -O ^has_journal /dev/whatever ? Is there another, even-better filesystem for this usecase?
I don't agree with the squashfs recommendations. You don't usually write a squashfs to a raw block device; think of it as an easily-readable tar archive. That means you would still need an underlying filesystem. ext2 has several severe limitations that limit its usefulness today; I would therefore recommend ext4. Since this is meant for archiving, you would create compressed archives to go on it; that means you would have a small number of fairly large files that rarely change. You can optimize for that: specify -I 128 to reduce the size of individual inodes, which reduces the size of the inode table. You can play with the -i option too, to reduce the size of the inode table even further. If you increase this value, there will be fewer inodes created, and therefore the inode table will also be smaller. However, that would mean the filesystem wastes more space on average per file. This is therefore a bit of a trade-off. You can indeed switch off the journal with -O ^has_journal. If you go down that route, though, I recommend that you set default options to mount the filesystem read-only; you can do this in fstab, or you could use tune2fs -E mount_opts=ro to record a default in the filesystem (you cannot do this at mkfs time). You should of course compress your data into archive files, so that the inode wastage isn't as bad a problem as it could be. You could create squashfs images, but xz compresses better, so I would recommend tar.xz files instead. You could also reduce the number of reserved blocks with the -m option to either mkfs or tune2fs. This sets the percentage (set to 5 by default) which is reserved for root only. Don't set it to zero; the filesystem requires some space for efficient operation.
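Putting those suggestions together, here is a sketch tried against a file-backed image rather than a real device (the image path, label and sizes are made up; substitute your actual device):

```shell
truncate -s 64M /tmp/archive.img
# Small inodes, 1% reserved blocks, no journal, as discussed above:
mkfs.ext4 -q -F -I 128 -m 1 -O ^has_journal -L archive /tmp/archive.img
tune2fs -E mount_opts=ro /tmp/archive.img   # default to read-only mounts
tune2fs -l /tmp/archive.img | grep 'Filesystem features'
# the printed feature list should not include has_journal
```

Checking the feature list afterwards is a cheap way to confirm the journal really was omitted before committing real data to the drive.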
"write-once archive": ext2 vs ext4^has_journal vs
1,357,727,051,000
How to remove a file that is corrupted? In Linux (Fedora based), when I type: ls -l I get drwxr-xr-x. 2 dmiller3 dmiller3 4096 Jul 26 13:57 SomeFile ?????????? ? ? ? 4096 Jul 26 13:57 CorruptedFile I can't do anything with this CorruptedFile. I can't use it in delete or anything. It's the only file in the entire system that is like this. What causes this, and how can I remove it? File system is ext2.
You could have been writing to a file during a hard reset, or your hard drive could have problems. A fsck should fix it (you will have to umount the fs to do this). I'd check dmesg and smartctl -a /dev/hdx (the latter is part of smartmontools) to see if your HD is reporting any errors. I'd also run a non-destructive badblocks on the partition. You should also ask yourself why you are running ext2, because journaling tends to help with these kinds of problems.
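If fsck clears the inconsistency but the entry with the unusable name remains, a common complementary trick is to delete by inode number rather than by name; sketched here on a throwaway file standing in for the bad entry:

```shell
f=$(mktemp ./corrupt-demo.XXXXXX)        # stand-in for the bad entry
inum=$(ls -i "$f" | awk '{print $1}')    # ls -i shows the inode number
find . -maxdepth 1 -inum "$inum" -delete # delete by inode, not by name
[ -e "$f" ] || echo removed              # prints: removed
```

This works even when the name contains characters the shell cannot type, because find matches on the inode alone.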
Remove a corrupted file in a Linux system
1,357,727,051,000
Is a journaling filesystem needed in today's desktop world? A good OS doesn't kernel panic every month, and if we are using a laptop, then there aren't any power outages, so why shouldn't we use ext2 as the standard filesystem on a desktop or laptop?
Hardware can still randomly glitch or fail from time to time. There are so many components involved in writing a file to storage - CPU, RAM, HDD, I/O bus, etc. It's not just power outages or reboots that can cause file-system corruption. That said, it's still okay to use EXT2, just don't complain if something goes wrong. I would only use it for non-critical things like transporting data on a USB stick. For my critical data, I use data mirroring on top of EXT3/4.
Is ext2 suitable for daily use on a desktop or laptop?
1,357,727,051,000
Is the ext2 filesystem good for /boot partition? I set ext4 for / root partition, but wasn't sure which filesystem to select for the /boot partition, and I just set ext2. Does it matter in this case?
It only matters if you're going to use the ancient GRUB legacy: ext4 is only supported by GRUB2. ext2 is simple, robust and well-supported, which makes it a good choice for /boot.
Ext2 filesystem for /boot partition
1,357,727,051,000
I'm about to set up my new USB key with Grub or Grub2. In the old days I used ext2 for the boot partition. I'm wondering if I could use ext4 for Grub2? And if I use Grub 0.9X, what about support of ext3?
Grub legacy (0.9x) supports ext2 and ext3 (ext3 is backward compatible with ext2) but not ext4 (unless you've turned off the backward-incompatible features, which doesn't leave much additional goodness compared with ext3). The development of Grub legacy stopped before ext4 was mature. There are unofficial patches to support ext4 on Grub legacy; the discussion on Debian bug #511121 has a pointer to two patches (one of which is in some versions of Ubuntu). Grub2 (1.9x, more precisely since 1.97) supports ext2, ext3 and ext4, with the same module (ext2.mod). None of the new features of ext4 are particularly useful for a separate /boot partition, so if that's what you have, you might as well stick to ext2. But if you keep your kernel and Grub configuration on the root partition, if it's ext4, make sure your Grub version is recent enough or patched.
Ext4 support in Grub 0.9X (legacy) and Grub 1.9X (Grub2)
1,357,727,051,000
In The Design and Implementation of a Log-Structured File System, It says: It takes at least five separate disk I/Os, each preceded by a seek, to create a new file in Unix FFS: two different accesses to the file’s attributes plus one access each for the file’s data, the directory’s data, and the directory’s attributes. What's the "two different accesses to the file’s attributes" ? I can count once only which is the inode was created.
Prof. Remzi Arpaci-Dusseau's book Operating Systems: Three Easy Pieces has an aside about file creation: As an example, think about what data structures must be updated when a file is created; assume, for this example, that the user creates a new file /foo/bar.txt and that the file is one block long (4KB). The file is new, and thus needs a new inode; thus, both the inode bitmap and the newly allocated inode will be written to disk. The file also has data in it and thus it too must be allocated; the data bitmap and a data block will thus (eventually) be written to disk. Hence, at least four writes to the current cylinder group will take place (recall that these writes may be buffered in memory for a while before they take place). But this is not all! In particular, when creating a new file, you must also place the file in the file-system hierarchy, i.e., the directory must be updated. Specifically, the parent directory foo must be updated to add the entry for bar.txt; this update may fit in an existing data block of foo or require a new block to be allocated (with associated data bitmap). The inode of foo must also be updated, both to reflect the new length of the directory as well as to update time fields (such as last-modified-time). Overall, it is a lot of work just to create a new file! Perhaps next time you do so, you should be more thankful, or at least surprised that it all works so well. Comparing the two, I speculate that the authors included the data block update with file attribute access (even though they explicitly say that what they mean by file attributes are the "inode", it doesn't seem unreasonable to consider data location as a file attribute). At any rate, it looks like they've understated the disk accesses: it needs at least 6 from Prof. Arpaci-Dusseau's description: inode bitmap inode data bitmap file data directory data directory attributes
Why does creating a file need at least five separate disk I/Os in Unix FFS?
1,357,727,051,000
In every publication I found about ext2, the structure of a block group is defined as following: Super Block: 1 block Group Descriptor: N blocks Data Bitmap: 1 block Inode Bitmap: 1 block Inode Table: N blocks Data Blocks: remaining blocks However in the ext2 kernel doc it is stated that versions >0 may not store copies of the super block and group descriptors in every block group. When I fsstat my ext2 partition, I get following output: Group: 1: Inode Range: 1977 - 3952 Block Range: 8193 - 16384 Layout: Super Block: 8193 - 8193 Group Descriptor Table: 8194 - 8194 Data bitmap: 8451 - 8451 Inode bitmap: 8452 - 8452 Inode Table: 8453 - 8699 Data Blocks: 8700 - 16384 Free Inodes: 1976 (100%) Free Blocks: 0 (0%) Total Directories: 0 Group: 2: Inode Range: 3953 - 5928 Block Range: 16385 - 24576 Layout: Data bitmap: 16385 - 16385 Inode bitmap: 16386 - 16386 Inode Table: 16387 - 16633 Data Blocks: 16387 - 16386, 16634 - 24576 Free Inodes: 1976 (100%) Free Blocks: 0 (0%) There are two things about this output that confuse me: In groups where the SB and group desc. are stored, there is a gap of 256 blocks between the group desc. and data bitmap. EDIT: Using dumpe2fs I just found out that these are reserved GDT blocks, used for online resizing. So the new question is, how is the size of these reserved GDT blocks determined? What does Data Blocks: 16387 - 16386 in Group 2 mean?
The resize_inode feature creates a hidden inode (number 7, you can view it in debugfs with stat <7>) to reserve those blocks so that the GDT can be grown. By default it reserves enough space to grow the filesystem to 1024 times its original size. You can disable the feature or adjust the size using options to mke2fs at format time. What does Data Blocks: 16387 - 16386 in Group 2 mean? This looks to simply be a bug in the program, as you can't have a negative-sized (ends before it starts) range.
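The reservation can be inspected and controlled at format time; here is a sketch against a file-backed image (path and sizes illustrative), using mke2fs's -E resize option to cap how far the filesystem may later be grown:

```shell
truncate -s 64M /tmp/ext2-demo.img
# Reserve GDT space for growth up to 65536 4KiB blocks (256 MiB)
# instead of the default 1024x the original size:
mke2fs -q -t ext2 -F -b 4096 -E resize=65536 /tmp/ext2-demo.img
dumpe2fs /tmp/ext2-demo.img 2>/dev/null | grep 'Reserved GDT blocks'
```

The grep shows how many blocks were set aside, which accounts for the gap between the group descriptor table and the data bitmap seen in the fsstat output.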
Ext2 block structure: size of reserved GDT Blocks
1,357,727,051,000
I want to make ext2 file system. I want to set "number-of-inodes" option to some number. I tried several values: if -N 99000 then Inode count: 99552 if -N 3500 then Inode count: 3904 if -N 500 then Inode count: 976 But always my value is not the same. Why? I call mkfs this way sudo mkfs -q -t ext2 -F /dev/sda2 -b 4096 -N 99000 -O none,sparse_super,large_file,filetype I check results this way $ sudo tune2fs -l /dev/sda2 tune2fs 1.46.5 (30-Dec-2021) Filesystem volume name: <none> Last mounted on: <not available> Filesystem UUID: 11111111-2222-3333-4444-555555555555 Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: filetype sparse_super large_file Filesystem flags: signed_directory_hash Default mount options: user_xattr acl Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 99552 Block count: 1973720 Reserved block count: 98686 Overhead clusters: 6362 Free blocks: 1967353 Free inodes: 99541 First block: 0 Block size: 4096 Fragment size: 4096 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 1632 Inode blocks per group: 102 Filesystem created: Thu Apr 6 20:00:45 2023 Last mount time: n/a Last write time: Thu Apr 6 20:01:49 2023 Mount count: 0 Maximum mount count: -1 Last checked: Thu Apr 6 20:00:45 2023 Check interval: 0 (<none>) Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 32 Desired extra isize: 32 Default directory hash: half_md4 Directory Hash Seed: 61ff1bad-c6c8-409f-b334-f277fb29df54
The number hasn't been ignored, it's been rounded up. It looks like space for inodes are allocated in groups. See in your output: Inodes per group: 1632 When you request 99,000 inodes, that's not divisible by 1,632. So to ensure that you get the number of inodes you requested, the number has been rounded up to 99,552 which is divisible by 1,632. It looks like this limit might be somehow derived from the number of block groups, where the number of inodes in each group is uniform across all block groups. My guess is that the number of inodes per block group is calculated as the number of inodes requested divided by the number of block groups and then rounded up to a whole number. See Ext2 on OSDev Wiki What is a Block Group? Blocks, along with inodes, are divided up into "block groups." These are nothing more than contiguous groups of blocks. Each block group reserves a few of its blocks for special purposes such as: ... A table of inode structures that belong to the group ..
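The rounding can be reproduced from the numbers in that tune2fs output. The following is a sketch of the presumed calculation, not mke2fs's actual code: it assumes a uniform inode count per group, rounded up so each group's inode table fills a whole number of blocks.

```python
import math

def round_inode_count(requested, block_count, blocks_per_group,
                      block_size, inode_size):
    """Approximate how mke2fs rounds up a requested inode count.

    Assumes the per-group inode count is uniform across block groups
    and must fill a whole number of inode-table blocks.
    """
    groups = math.ceil(block_count / blocks_per_group)      # 61 groups here
    inodes_per_block = block_size // inode_size             # 4096 / 256 = 16
    per_group = math.ceil(requested / groups)               # spread evenly
    # Round up to a whole number of inode-table blocks per group:
    per_group = math.ceil(per_group / inodes_per_block) * inodes_per_block
    return per_group * groups

# Values taken from the tune2fs -l output in the question:
print(round_inode_count(99000, 1973720, 32768, 4096, 256))  # 99552
```

This reproduces both the reported total (99,552) and the "Inode blocks per group: 102" line (102 × 16 = 1,632 inodes per group).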
mkfs ext2 ignore number-of-inodes
1,357,727,051,000
I read somewhere that an operating system which knows nothing about ext3 and ext4 (i.e. an antique Linux version) is able to read/write ext4, and that it detects any ext4 file system as ext2. I am not quite sure whether the same is possible with Tux2/Tux3 or with FAT12. (FAT64 is exFAT.) How exactly is that possible? How exactly can ext4 be treated like ext2? Is there no risk of file or metadata corruption?
This depends heavily on how the ext4 filesystem was formatted. Some newer ext4 features (e.g. extents or 64bit) cannot be understood by older ext2 drivers, and the kernel would refuse to mount the filesystem (see, for example this post). In general, any filesystem formatted with a modern mke2fs with the default -t ext4 options will not be mountable by an old ext2 driver, but if the filesystem was originally formatted a long time ago, then upgraded to ext4, it may still be mountable by ext2 if none of the newer ext4-specific features were enabled. The ext2/3/4 filesystems track which features are in use by compat, rocompat, and incompat feature flags. These features are normally set at mke2fs time, but can sometimes be changed by tune2fs. If an unknown compat feature is found, the kernel will mount it, but e2fsck will refuse to check it because it might do something wrong. If an unknown rocompat feature is found, the kernel can mount the filesystem read-only, and any unknown incompat feature will prevent the filesystem from being mounted at all (a message will be printed to /var/log/messages in this case). You can use debugfs -c -R features <device> to dump the features enabled on a filesystem, for example: # debugfs -c -R features /dev/sdb1 debugfs 1.42.13.wc5 (15-Apr-2016) /dev/sdb1: catastrophic mode - not reading inode or group bitmaps Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery dirdata sparse_super large_file huge_file uninit_bg dir_nlink Though this doesn't tell you which ones are compat, rocompat, or incompat. If your version of debugfs doesn't understand some newer feature, it will print it like I0400 or similar.
How exactly is ext2 upwards-compatible?
1,357,727,051,000
I'm trying to create a disk device in a file with: dd if=/dev/zero of=file.img bs=516096 count=1000 sudo losetup /dev/loop0 file.img (echo n; echo p; echo 1; echo ""; echo ""; echo w;) | sudo fdisk -u -C1000 -S63 -H16 file.img sudo mke2fs -b1024 /dev/loop0 503968 Then I mount it with: sudo mkdir /mnt/fcd sudo mount -t ext2 /dev/loop0 /mnt/fcd and write a self-written bootloader: sudo dd if=loader.bin of=file.img bs=512 count=1 conv=notrunc and unmount it with: sudo umount /dev/loop0 sudo losetup -d /dev/loop0 I have two questions: 1. I'm getting the following output in fdisk: Using default response p Partition number (1-4, default 1): Using default value 1 First sector (2048-1007999, default 2048): Using default value 2048 Last sector, +sectors or +size{K,M,G} (2048-1007999, default 1007999): Why does the first sector start from 2048 and not from 0? Is 0-2048 for the MBR in ext2 or for something else? 2. After my disk is created, I execute: fdisk -l file.img And its output is: Disk file.img: 516 MB, 516096000 bytes 255 heads, 63 sectors/track, 62 cylinders, total 1008000 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System Why is there no partition listed? Thank you.
1.- Originally, fdisk created partitions trying to make them aligned to cylinder boundaries, leaving the first cylinder on disk free, as it would be used for the MBR, partition table and other stuff. This way, the first partition usually started on block 63 (each block being 512 bytes). The fdisk from distributions like RedHat 6.x still works this way, but warns you about it and lets you choose the non-DOS-compatible scheme. Newer fdisk versions create partitions aligned to MB boundaries, to ensure partitions are page aligned, as this makes a huge impact on performance (if you're curious about this, you can read more on my blog: http://sinrega.org/?p=14 and http://sinrega.org/?p=13). 2048 is 1 MB (2048*512). 2.- You're overwriting the partition table when you create the ext2 filesystem on the file-backed device. After creating the partition, you should attach another loop device starting at the same block as the partition you've created. In your case, as the partition starts at 2048, the offset should be 1048576 (2048 * 512): losetup -o 1048576 /dev/loop1 file.img mkfs.ext2 /dev/loop1 mount /dev/loop1 /mnt # Do whatever you want with the partition here umount /mnt losetup -d /dev/loop1 losetup -d /dev/loop0 This should do the trick.
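To make the offset arithmetic explicit, here is a small sketch using the sector numbers from the question (the privileged commands are shown as comments only):

```shell
# Partition start sector and sector size from the fdisk output above.
start_sector=2048
sector_size=512
offset=$((start_sector * sector_size))
echo "$offset"   # 1048576 -- byte offset for losetup -o

# Then, as root (device names as in the question):
#   losetup -o "$offset" /dev/loop1 file.img
#   mkfs.ext2 /dev/loop1
# Newer losetup can instead scan the partition table for you:
#   losetup -P /dev/loop0 file.img   # creates /dev/loop0p1
```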
Creating disk device in a file
1,357,727,051,000
I decided to switch from CentOS to FreeBSD 10 after I had a really good experience installing it on a Xserve G4 (PowerPC, that's a story for another day if anyone is interested). Anyway, my CentOS machine (x86) connected to an iSCSI target that held all of my data. I am now trying to connect my new FreeBSD machine to that iSCSI target and mount the partition. I have no problem connecting the target. Issuing the command # iscsictl Result: Target name Target portal State iqn.2000-01.com.synology:diskstation.linuxserver diskstation.home Connected: da0 Ok, so my drive is connected. If I do an fdisk on that particular drive, I see that the sysid = 131 which means it's an ext2/ext3 partition - this is correct. fdisk /dev/da0 ******* Working on device /dev/da0 ******* parameters extracted from in-core disklabel are: cylinders=1305 heads=255 sectors/track=63 (16065 blks/cyl) Figures below won't work with BIOS for partitions not in cyl 1 parameters to be used for BIOS calculations are: cylinders=1305 heads=255 sectors/track=63 (16065 blks/cyl) Media sector size is 512 Warning: BIOS sector numbering starts with sector 1 Information from DOS bootblock is: The data for partition 1 is: sysid 131 (0x83),(Linux native) start 2048, size 20969472 (10239 Meg), flag 0 beg: cyl 1/ head 0/ sector 1; end: cyl 1023/ head 63/ sector 32 Here is where the issue comes in. When I try to mount the volume, I get an error message "Invalid Argument" # mount -t ext2fs /dev/da0s1 /mnt mount: /dev/da0s1: Invalid argument When I look at my /var/log/messages, I find this message: WARNING: mount of da0s1 denied due to unsupported optional features I don't know what argument it's looking for and I am not aware of any "unsupported optional features." A point in the right direction would be greatly appreciated. Update I issued the following command to manually load the ext2fs as a kernel loadable module as per the man page man ext2fs(5). 
# kldload ext2fs kldload: can't load ext2fs: module already loaded or in kernel So, it seems that support is already there, it just isn't connecting.
Answer I ran across this thread on the FreeBSD Forums. While it was nearly identical to my issue in almost every way, the main differentiating point was that it was in reference to ext4, not ext2. Since ext4 is technically backward compatible with ext2/3, I decided to take the chance and see if this solution would work - it did. Here's what I did to mount the drive 1) Install fusefs-ext4fuse (using the ports method) cd /usr/ports/sysutils/fusefs-ext4fuse make install clean Fuse will compile and install in about 20 seconds (that's what it took for me), then I issued the command: # kldload fuse 2) Next, I mount the drive to a mountpoint I previously created (this directory must exist). # ext4fuse /dev/da0s1 /mnt/linux Then I traverse to the directory and list the contents # cd /mnt/linux # ls .DS_Store ._foundation html .VolumeIcon.icns ._html lost+found ._. cgi-bin site-backups ._.DS_Store cron.log ._.VolumeIcon.icns foundation It works! 3) Next, I went to my NAS, created another iSCSI target and formatted it with exFAT (or FAT32) so that it is compatible across Mac/Windows/Linux/FreeBSD. I then copied all the contents from my original drive to the new drive with the more compatible format.
Mount iSCSI ext2 Linux Partition on FreeBSD 10.2
1,357,727,051,000
Using dumpe2fs on some ext4 partition, I see in the initial data that the first inode is #11. However, if I ls -i on this disk's root partition, I see that its inode number is #2 (as expected). So... What is this "first inode" reported by dumpe2fs?
#11 is the first "non-special" inode that can be used for the first regularly created file or directory (usually used for lost+found). The number of that inode is saved in the filesystem superblock (s_first_ino), so technically it doesn't need to be #11, but mke2fs always sets it that way. Most of the inodes from #0 to #10 have special purposes (e.g. #2 is the root directory) but some are reserved or used in non-upstream versions of the ext filesystem family. The usages are documented on kernel.org.

Inode   Purpose
0       n/a
1       List of defective blocks
2       Root directory
3       User quota
4       Group quota
5       Reserved for boot loaders
6       Undelete directory (reserved)
7       "resize inode"
8       Journal
9       "exclude" inode (reserved)
10      Replica inode (reserved)
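Since s_first_ino lives in the superblock, it can be read directly from a device or image. The following sketch assumes the rev-1 superblock layout documented on kernel.org (superblock at byte offset 1024; little-endian fields with s_magic at offset 56, s_rev_level at 76 and s_first_ino at 84 within the superblock) - the device path in the usage comment is a placeholder:

```python
import struct

SUPERBLOCK_OFFSET = 1024   # superblock starts 1 KiB into the device
EXT2_MAGIC = 0xEF53

def first_inode(superblock: bytes) -> int:
    """Extract s_first_ino from a raw ext2/3/4 superblock."""
    (magic,) = struct.unpack_from("<H", superblock, 56)
    if magic != EXT2_MAGIC:
        raise ValueError("not an ext2/3/4 superblock")
    (rev,) = struct.unpack_from("<I", superblock, 76)
    if rev == 0:
        return 11          # fixed value in revision-0 filesystems
    (s_first_ino,) = struct.unpack_from("<I", superblock, 84)
    return s_first_ino

# Usage sketch (needs read access to the device):
# with open("/dev/sda1", "rb") as dev:
#     dev.seek(SUPERBLOCK_OFFSET)
#     print(first_inode(dev.read(1024)))
```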
what is this “first inode” reported by dumpe2fs?
1,357,727,051,000
I have an embedded Linux device with a read-only file system. I have another partition that is used to store an archive of logs. This partition will be written to a lot. What Linux file system should I use to ensure longevity and stability? I heard the ext2-ext4 file systems use a lot of reads/writes for journaling. What about vfat? What about unexpected power interruptions?
You could safely use ext3 with the noatime option: then only actual file writes would touch your flash device in write mode. The ext3fs journal is a good thing for an embedded system that may suddenly lose power. I have personally run a few Raspberry Pis this way, equipped with simple SD memory cards, for a couple of years (24/7, not backed up by a UPS and with sudden power interruptions), and I have not had to replace the cards yet, nor have I had any problems with startup after power recovery. Compared to vfat, as I mentioned, the journal is an advantage. Edit: moreover, I run them with an rw-mounted root fs
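As a concrete illustration, a hypothetical /etc/fstab entry for such a log partition (the device and mount point are made up; only the noatime recommendation comes from the answer):

```
# /etc/fstab -- hypothetical log partition on flash
# noatime stops read accesses from generating metadata writes.
/dev/mmcblk0p2  /var/log  ext3  defaults,noatime  0  2
```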
Embedded device, log partition, what file system is more resilient and uses less reads/writes?
1,357,727,051,000
I just saw an answer question about filesystems for embedded hardware on another Stack Exchange site. The question was "What file system format should I use on flash memory?" and the answer suggested the ext2 filesystem, or the ext3 filesystem with journaling disabled a'la tune2fs -O ^has_journal /dev/sdbX This made me wonder... What would the advantage be to using ext3 (with journaling disabled) over ext2? As far as I understood, the only real difference between the two was the journal. What other differences between ext2 and ext3 are there?
The journal is the difference. You cannot have an ext3 filesystem without a journal. If you disable the journal, it becomes an ext2 filesystem again. ext4 has a number of beneficial features and can run without a journal, making it a much better choice.
Besides the journal, what are the differences between ext2 and ext3?
1,357,727,051,000
Can I safely conclude that the sticky bit is not used in current file systems, and reuse the bit for my own purpose?
No, you cannot assume that. It's not true for directories. You can make the narrower assumption that it's true for non-directory files.
Is the sticky bit not used in current file systems
1,357,727,051,000
Leaving out many details, I need to create a read/write file system on a device with the following main goals: Eliminate all writes while data is not being explicitly written. Reduce all indirect writes when data is written. Run fsck on boot after unclean unmount. Currently I am using ext3, mounted with noatime. I am not familiar with the details of ext3. In particular, is data written to an ext3 system during "idle" time when no programs are explicitly writing data (specifically, I'm thinking of kjournald and the commit= mount option)? If I switch to ext2, will that meet all the above requirements? In particular, do I have to set anything up to force an fsck after a sudden power cut? My options are fat32, ext, ext2, and ext3, plus all of the settings available via mount. Performance is not critical, neither is robustness wrt bad sectors developing over time.
You don't need to switch to ext2; you can tune ext3. You can change the fsck requirements of a filesystem using tune2fs. A quick look tells me the correct command is tune2fs -c <mount-count>, but see the man page for the details. You can change how data will be written to the ext3 filesystem during mounting. You want either data=journal or data=ordered. You can further optimize journal commits via other options. Please see this page. Last but not least, on big drives fsck can take a long time while using ext3. Why don't you consider ext4 as an option? Please comment on this answer if I left anything in the dark.
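As an illustration of those tuning knobs (the device name is a placeholder, the commands need root, and commit=300 is just an example interval):

```shell
# Force a full fsck on every mount, so a check always follows a crash:
tune2fs -c 1 /dev/sdX1

# Lengthen the journal commit interval so the device stays idle longer
# between flushes (data is still forced out by sync/fsync):
mount -o noatime,data=ordered,commit=300 /dev/sdX1 /mnt
```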
Minimizing "idle" writes on a file system
1,357,727,051,000
I have a latency sensitive application running on an embedded system, and I'm seeing some discrepancy between writing to a ext4 partition and an ext2 partition on the same physical device. Specifically, I see intermittent delays when performing many small updates on a memory map, but only on ext4. I've tried what seem to be some of the usual tricks for improving performance (especially variations in latency) by mounting ext4 with different options and have settled on these mount options: mount -t ext4 -o remount,rw,noatime,nodiratime,user_xattr,barrier=1,data=ordered,nodelalloc /dev/mmcblk0p6 /media/mmc/data barrier=0 didn't seem to provide any improvement. For the ext2 partition, the following flags are used: /dev/mmcblk0p3 on /media/mmc/data2 type ext2 (rw,relatime,errors=continue) Here's the test program I'm using: #include <stdio.h> #include <cstring> #include <cstdio> #include <string.h> #include <stdint.h> #include <sys/mman.h> #include <sys/stat.h> #include <sys/types.h> #include <unistd.h> #include <fcntl.h> #include <stdint.h> #include <cstdlib> #include <time.h> #include <stdio.h> #include <signal.h> #include <pthread.h> #include <unistd.h> #include <errno.h> #include <stdlib.h> uint32_t getMonotonicMillis() { struct timespec time; clock_gettime(CLOCK_MONOTONIC, &time); uint32_t millis = (time.tv_nsec/1000000)+(time.tv_sec*1000); return millis; } void tune(const char* name, const char* value) { FILE* tuneFd = fopen(name, "wb+"); fwrite(value, strlen(value), 1, tuneFd); fclose(tuneFd); } void tuneForFasterWriteback() { tune("/proc/sys/vm/dirty_writeback_centisecs", "25"); tune("/proc/sys/vm/dirty_expire_centisecs", "200"); tune("/proc/sys/vm/dirty_background_ratio", "5"); tune("/proc/sys/vm/dirty_ratio", "40"); tune("/proc/sys/vm/swappiness", "0"); } class MMapper { public: const char* _backingPath; int _blockSize; int _blockCount; bool _isSparse; int _size; uint8_t *_data; int _backingFile; uint8_t *_buffer; MMapper(const char *backingPath, int blockSize, 
int blockCount, bool isSparse) : _backingPath(backingPath), _blockSize(blockSize), _blockCount(blockCount), _isSparse(isSparse), _size(blockSize*blockCount) { printf("Creating MMapper for %s with block size %i, block count %i and it is%s sparse\n", _backingPath, _blockSize, _blockCount, _isSparse ? "" : " not"); _backingFile = open(_backingPath, O_CREAT | O_RDWR | O_TRUNC, 0600); if(_isSparse) { ftruncate(_backingFile, _size); } else { posix_fallocate(_backingFile, 0, _size); fsync(_backingFile); } _data = (uint8_t*)mmap(NULL, _size, PROT_READ | PROT_WRITE, MAP_SHARED, _backingFile, 0); _buffer = new uint8_t[blockSize]; printf("MMapper %s created!\n", _backingPath); } ~MMapper() { printf("Destroying MMapper %s\n", _backingPath); if(_data) { msync(_data, _size, MS_SYNC); munmap(_data, _size); close(_backingFile); _data = NULL; delete [] _buffer; _buffer = NULL; } printf("Destroyed!\n"); } void writeBlock(int whichBlock) { memcpy(&_data[whichBlock*_blockSize], _buffer, _blockSize); } }; int main(int argc, char** argv) { tuneForFasterWriteback(); int timeBetweenBlocks = 40*1000; //2^12 x 2^16 = 2^28 = 2^10*2^10*2^8 = 256MB int blockSize = 4*1024; int blockCount = 64*1024; int bigBlockCount = 2*64*1024; int iterations = 25*40*60; //25 counts simulates 1 layer for one second, 5 minutes here uint32_t startMillis = getMonotonicMillis(); int measureIterationCount = 50; MMapper mapper("sparse", blockSize, bigBlockCount, true); for(int i=0; i<iterations; i++) { int block = rand()%blockCount; mapper.writeBlock(block); usleep(timeBetweenBlocks); if(i%measureIterationCount==measureIterationCount-1) { uint32_t elapsedTime = getMonotonicMillis()-startMillis; printf("%i took %u\n", i, elapsedTime); startMillis = getMonotonicMillis(); } } return 0; } Fairly simplistic test case. I don't expect terribly accurate timing, I'm more interested in general trends. 
Before running the tests, I ensured that the system is in a fairly steady state with very little disk write activity occurring by doing something like: watch grep -e Writeback: -e Dirty: /proc/meminfo There is very little to no disk activity. This is also verified by seeing 0 or 1 in the wait column from the output of vmstat 1. I also perform a sync immediately before running the test. Note the aggressive writeback parameters being provided to the vm subsystem as well. When I run the test on the ext2 partition, the first one hundred batches of fifty writes yield a nice solid 2012 ms with a standard deviation of 8 ms. When I run the same test on the ext4 partition, I see an average of 2151 ms, but an abysmal standard deviation of 409 ms. My primary concern is variation in latency, so this is frustrating. The actual times for the ext4 partition test look like this: {2372, 3291, 2025, 2020, 2019, 2019, 2019, 2019, 2019, 2020, 2019, 2019, 2019, 2019, 2020, 2021, 2037, 2019, 2021, 2021, 2020, 2152, 2020, 2021, 2019, 2019, 2020, 2153, 2020, 2020, 2021, 2020, 2020, 2020, 2043, 2021, 2019, 2019, 2019, 2053, 2019, 2020, 2023, 2020, 2020, 2021, 2019, 2022, 2019, 2020, 2020, 2020, 2019, 2020, 2019, 2019, 2021, 2023, 2019, 2023, 2025, 3574, 2019, 3013, 2019, 2021, 2019, 3755, 2021, 2020, 2020, 2019, 2020, 2020, 2019, 2799, 2020, 2019, 2019, 2020, 2020, 2143, 2088, 2026, 2017, 2310, 2020, 2485, 4214, 2023, 2020, 2023, 3405, 2020, 2019, 2020, 2020, 2019, 2020, 3591} Unfortunately, I don't know if ext2 is an option for the end solution, so I'm trying to understand the difference in behavior between the file systems. I would most likely have control over at least the flags being used to mount the ext4 system and tweak those. noatime/nodiratime don't seem to make much of a dent. barrier=0/1 doesn't seem to matter. nodelalloc helps a bit, but doesn't do nearly enough to smooth out the latency variation. The ext4 partition is only about 10% full. Thanks for any thoughts on this issue!
One word: Journaling. http://www.thegeekstuff.com/2011/05/ext2-ext3-ext4/ As you talk about embedded, I'm assuming you have some form of flash memory? Performance is very spiky on journaled ext4 on flash. Ext2 is recommended. Here is a good article on disabling journaling and tweaking the fs for no journaling if you must use ext4: http://fenidik.blogspot.com/2010/03/ext4-disable-journal.html
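For completeness, removing the journal from an existing filesystem might look like the following sketch (the device name is a placeholder; the filesystem must be unmounted and the commands need root):

```shell
tune2fs -O ^has_journal /dev/sdX1            # drop the journal
e2fsck -f /dev/sdX1                          # recheck after the feature change
dumpe2fs -h /dev/sdX1 | grep -i features     # confirm has_journal is gone
```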
Ext4 exhibits unexpected write latency variance vs. ext2
1,357,727,051,000
I'm formatting a disk with the following command switches, and it formats the disk to ext4. sudo mke2fs -F -E lazy_itable_init=0,lazy_journal_init=0,discard -t ext4 -b 4096 ... However, once I add this switch: -O ^has_journal it gets formatted as ext2 instead. Could you explain why?
Because ext4 is an extension of the ext2 and ext3 filesystems; one of the features that it extended was the use of a journal. References: https://ext4.wiki.kernel.org/index.php/Frequently_Asked_Questions#What_is_the_difference_between_ext2.2C_ext3.2C_and_ext4.3F https://unix.stackexchange.com/a/60757/117549
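You can watch this on throwaway image files, no real disk needed: the superblock feature list is what tools inspect to decide which flavor they are looking at. A sketch assuming e2fsprogs is installed (file names are made up):

```shell
# Two sparse 64 MB images: one default ext4, one with the journal removed.
truncate -s 64M a.img b.img
mke2fs -q -F -t ext4 a.img                   # default ext4 feature set
mke2fs -q -F -t ext4 -O ^has_journal b.img   # same, minus the journal
dumpe2fs -h a.img | grep -i 'filesystem features'
dumpe2fs -h b.img | grep -i 'filesystem features'
```

Comparing the two feature lines shows that the only difference is the has_journal flag, which is what identification tools key off.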
Why this switch will effectively format disk to ext2 instead of ext4?
1,481,190,816,000
What are the commands to find out the fan speed and CPU temperature in Linux? (I know lm-sensors can do the task.) Is there any alternative?
For CPU temperature: On Debian: sudo apt-get install lm-sensors On CentOS: sudo yum install lm_sensors Run using: sudo sensors-detect Type sensors to get the CPU temp. For fan speed: sensors | grep -i fan This will output the fan speed. Alternatively, install psensor using: sudo apt-get install psensor One can also use hardinfo: sudo apt-get install hardinfo
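Underneath, lm-sensors reads the kernel's hwmon sysfs interface, which you can also query directly. A sketch (not part of lm-sensors; it simply walks /sys/class/hwmon and returns an empty list on machines without exposed sensors):

```python
from pathlib import Path

def read_hwmon_sensors(root="/sys/class/hwmon"):
    """Collect temperature and fan readings from the hwmon sysfs interface.

    Returns a list of (chip_name, sensor_file, raw_value) tuples.
    temp*_input values are in millidegrees Celsius, fan*_input in RPM.
    """
    readings = []
    for hwmon in sorted(Path(root).glob("hwmon*")):
        try:
            name = (hwmon / "name").read_text().strip()
        except OSError:
            name = hwmon.name
        for sensor in sorted(hwmon.glob("temp*_input")) + sorted(hwmon.glob("fan*_input")):
            try:
                value = int(sensor.read_text())
            except (OSError, ValueError):
                continue  # sensor present but not readable right now
            readings.append((name, sensor.name, value))
    return readings

if __name__ == "__main__":
    for chip, sensor, value in read_hwmon_sensors():
        unit = "RPM" if sensor.startswith("fan") else "m°C"
        print(f"{chip}/{sensor}: {value} {unit}")
```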
Find fan speed and cpu temp in Linux
1,481,190,816,000
How is it possible to control the fan speed of multiple consumer NVIDIA GPUs such as Titan and 1080 Ti on a headless node running Linux?
The following is a simple method that does not require scripting, connecting fake monitors, or fiddling and can be executed over SSH to control multiple NVIDIA GPUs' fans. It has been tested on Arch Linux. Create xorg.conf sudo nvidia-xconfig --allow-empty-initial-configuration --enable-all-gpus --cool-bits=7 This will create an /etc/X11/xorg.conf with an entry for each GPU, similar to the manual method. Note: Some distributions (Fedora, CentOS, Manjaro) have additional config files (eg in /etc/X11/xorg.conf.d/ or /usr/share/X11/xorg.conf.d/), which override xorg.conf and set AllowNVIDIAGPUScreens. This option is not compatible with this guide. The extra config files should be modified or deleted. The X11 log file shows which config files have been loaded. Alternative: Create xorg.conf manually Identify your cards' PCI IDs: nvidia-xconfig --query-gpu-info Find the PCI BusID fields. Note that these are not the same as the bus IDs reported in the kernel. Alternatively, do sudo startx, open /var/log/Xorg.0.log (or whatever location startX lists in its output under the line "Log file:"), and look for the line NVIDIA(0): Valid display device(s) on GPU-<GPU number> at PCI:<PCI ID>. 
Edit /etc/X11/xorg.conf Here is an example of xorg.conf for a three-GPU machine: Section "ServerLayout" Identifier "dual" Screen 0 "Screen0" Screen 1 "Screen1" RightOf "Screen0" Screen 1 "Screen2" RightOf "Screen1" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BusID "PCI:5:0:0" Option "Coolbits" "7" Option "AllowEmptyInitialConfiguration" EndSection Section "Device" Identifier "Device1" Driver "nvidia" VendorName "NVIDIA Corporation" BusID "PCI:6:0:0" Option "Coolbits" "7" Option "AllowEmptyInitialConfiguration" EndSection Section "Device" Identifier "Device2" Driver "nvidia" VendorName "NVIDIA Corporation" BusID "PCI:9:0:0" Option "Coolbits" "7" Option "AllowEmptyInitialConfiguration" EndSection Section "Screen" Identifier "Screen0" Device "Device0" EndSection Section "Screen" Identifier "Screen1" Device "Device1" EndSection Section "Screen" Identifier "Screen2" Device "Device2" EndSection The BusID must match the bus IDs we identified in the previous step. The option AllowEmptyInitialConfiguration allows X to start even if no monitor is connected. The option Coolbits allows fans to be controlled. It can also allow overclocking. Note: Some distributions (Fedora, CentOS, Manjaro) have additional config files (eg in /etc/X11/xorg.conf.d/ or /usr/share/X11/xorg.conf.d/), which override xorg.conf and set AllowNVIDIAGPUScreens. This option is not compatible with this guide. The extra config files should be modified or deleted. The X11 log file shows which config files have been loaded. Edit /root/.xinitrc nvidia-settings -q fans nvidia-settings -a [gpu:0]/GPUFanControlState=1 -a [fan:0]/GPUTargetFanSpeed=75 nvidia-settings -a [gpu:1]/GPUFanControlState=1 -a [fan:1]/GPUTargetFanSpeed=75 nvidia-settings -a [gpu:2]/GPUFanControlState=1 -a [fan:2]/GPUTargetFanSpeed=75 I use .xinitrc to execute nvidia-settings for convenience, although there's probably other ways. The first line will print out every GPU fan in the system. 
Here, I set the fans to 75%. Launch X sudo startx -- :0 You can execute this command from SSH. The output will be: Current version of pixman: 0.34.0 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Sat May 27 02:22:08 2017 (==) Using config file: "/etc/X11/xorg.conf" (==) Using system config directory "/usr/share/X11/xorg.conf.d" Attribute 'GPUFanControlState' (pushistik:0[gpu:0]) assigned value 1. Attribute 'GPUTargetFanSpeed' (pushistik:0[fan:0]) assigned value 75. Attribute 'GPUFanControlState' (pushistik:0[gpu:1]) assigned value 1. Attribute 'GPUTargetFanSpeed' (pushistik:0[fan:1]) assigned value 75. Attribute 'GPUFanControlState' (pushistik:0[gpu:2]) assigned value 1. Attribute 'GPUTargetFanSpeed' (pushistik:0[fan:2]) assigned value 75. Monitor temperatures and clock speeds nvidia-smi and nvtop can be used to observe temperatures and power draw. Lower temperatures will allow the card to clock higher and increase its power draw. You can use sudo nvidia-smi -pl 150 to limit power draw and keep the cards cool, or use sudo nvidia-smi -pl 300 to let them overclock. My 1080 Ti runs at 1480 MHz if given 150W, and over 1800 MHz if given 300W, but this depends on the workload. You can monitor their clock speed with nvidia-smi -q or more specifically, watch 'nvidia-smi -q | grep -E "Utilization| Graphics|Power Draw"' Returning to automatic fan management. Reboot. I haven't found another way to make the fans automatic.
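Building on this, a crude temperature-based fan curve could be scripted around the same two tools. This is only a sketch: the thresholds, the GPU/fan indices, and the 10-second poll interval are arbitrary, and it assumes the X server configured above is running on display :0 (the loop runs until killed):

```shell
export DISPLAY=:0
while sleep 10; do
    # Read GPU 0's core temperature in degrees C.
    t=$(nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader -i 0)
    if   [ "$t" -ge 80 ]; then speed=100
    elif [ "$t" -ge 65 ]; then speed=75
    else                       speed=40
    fi
    nvidia-settings -a "[gpu:0]/GPUFanControlState=1" \
                    -a "[fan:0]/GPUTargetFanSpeed=$speed" >/dev/null
done
```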
How to adjust NVIDIA GPU fan speed on a headless node?
1,481,190,816,000
I am using ArchLinux on an HP Pavilion dv9000t which has overheating problems. I did everything I could to get better air flow in the laptop and applied better thermal paste, but there is still a problem: the fan stops spinning when the CPU temperature is low (even if the GPU temperature is high, which is problematic). I found out I can get the fan running by launching some heavy processing commands (like the yes command). However, it is not a solution because I need to stop this command when the CPU gets too hot and launch it again when the fan stops (so that the GPU does not get hot). I tried to control the fan using this wiki, but when I run pwmconfig, I get this error: /usr/bin/pwmconfig: There are no pwm-capable sensor modules installed Do you know what I can do to get the fan always spinning? Edit: The sensors-detect output is the following: ~/ sudo sensors-detect # sensors-detect revision 6170 (2013-05-20 21:25:22 +0200) # System: Hewlett-Packard HP Pavilion dv9700 Notebook PC [Rev 1] (laptop) # Board: Quanta 30CB This program will help you determine which kernel modules you need to load to use lm_sensors most effectively. It is generally safe and recommended to accept the default answers to all questions, unless you know what you're doing. Some south bridges, CPUs or memory controllers contain embedded sensors. Do you want to scan for them? This is totally safe. (YES/no): Module cpuid loaded successfully. Silicon Integrated Systems SIS5595... No VIA VT82C686 Integrated Sensors... No VIA VT8231 Integrated Sensors... No AMD K8 thermal sensors... No AMD Family 10h thermal sensors... No AMD Family 11h thermal sensors... No AMD Family 12h and 14h thermal sensors... No AMD Family 15h thermal sensors... No AMD Family 15h power sensors... No AMD Family 16h power sensors... No Intel digital thermal sensor... Success! (driver `coretemp') Intel AMB FB-DIMM thermal sensor... No VIA C7 thermal sensor... 
No Some Super I/O chips contain embedded sensors. We have to write to standard I/O ports to probe them. This is usually safe. Do you want to scan for Super I/O sensors? (YES/no): Probing for Super-I/O at 0x2e/0x2f Trying family `National Semiconductor/ITE'... No Trying family `SMSC'... No Trying family `VIA/Winbond/Nuvoton/Fintek'... No Trying family `ITE'... No Probing for Super-I/O at 0x4e/0x4f Trying family `National Semiconductor/ITE'... No Trying family `SMSC'... No Trying family `VIA/Winbond/Nuvoton/Fintek'... No Trying family `ITE'... No Some hardware monitoring chips are accessible through the ISA I/O ports. We have to write to arbitrary I/O ports to probe them. This is usually safe though. Yes, you do have ISA I/O ports even if you do not have any ISA slots! Do you want to scan the ISA I/O ports? (YES/no): Probing for `National Semiconductor LM78' at 0x290... No Probing for `National Semiconductor LM79' at 0x290... No Probing for `Winbond W83781D' at 0x290... No Probing for `Winbond W83782D' at 0x290... No Lastly, we can probe the I2C/SMBus adapters for connected hardware monitoring devices. This is the most risky part, and while it works reasonably well on most systems, it has been reported to cause trouble on some systems. Do you want to probe the I2C/SMBus adapters now? (YES/no): Using driver `i2c-i801' for device 0000:00:1f.3: Intel 82801H ICH8 Module i2c-dev loaded successfully. Next adapter: nouveau-0000:01:00.0-0 (i2c-0) Do you want to scan it? (yes/NO/selectively): Next adapter: nouveau-0000:01:00.0-1 (i2c-1) Do you want to scan it? (yes/NO/selectively): Next adapter: nouveau-0000:01:00.0-2 (i2c-2) Do you want to scan it? (yes/NO/selectively): Now follows a summary of the probes I have just done. Just press ENTER to continue: Driver `coretemp': * Chip `Intel digital thermal sensor' (confidence: 9) Do you want to overwrite /etc/conf.d/lm_sensors? (YES/no): Unloading i2c-dev... OK Unloading cpuid... 
OK The file /etc/conf.d/lm_sensors contains: HWMON_MODULES="coretemp" And the file /etc/modules-load.d/lm_sensors.conf contains: coretemp acpi-cpufreq The command sensors outputs this: ~/ sensors coretemp-isa-0000 Adapter: ISA adapter Core 0: +46.0°C (high = +85.0°C, crit = +85.0°C) Core 1: +47.0°C (high = +85.0°C, crit = +85.0°C) acpitz-virtual-0 Adapter: Virtual device temp1: +49.0°C nouveau-pci-0100 Adapter: PCI adapter temp1: +60.0°C (high = +95.0°C, hyst = +3.0°C) (crit = +115.0°C, hyst = +5.0°C) (emerg = +115.0°C, hyst = +5.0°C)
I finally decided to go with a hardware solution. I cut two wires from the fan, and now the fan always spins (at the maximum level, though). I found this solution in this blog post.
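The software workaround described in the question — start an artificial load (yes) when the fan stops, kill it when the CPU gets too hot — can also be sketched as a small shell loop. The thresholds and the decision logic below are illustrative assumptions, not something tested on this hardware; only the idea of toggling a load comes from the question.

```shell
# Sketch of the load-toggling workaround described in the question.
# HIGH/LOW are made-up thresholds; tune them for your machine.
HIGH=70   # stop the artificial load above this CPU temperature (°C)
LOW=55    # start it again once the CPU has cooled below this

# Decide whether the load generator should run, given the current
# temperature and whether it is running now (hysteresis between LOW/HIGH).
should_load() {  # should_load <temp_c> <running:0|1>
  local temp=$1 running=$2
  if [ "$temp" -ge "$HIGH" ]; then
    echo 0                      # too hot: stop the load
  elif [ "$temp" -lt "$LOW" ]; then
    echo 1                      # cool: run `yes` to keep the fan on
  else
    echo "$running"             # in between: keep the current state
  fi
}
```

In the real loop you would read the temperature from a sysfs thermal zone (e.g. /sys/class/thermal/thermal_zone0/temp, which reports millidegrees, so divide by 1000) and start or kill `yes > /dev/null &` according to the function's answer.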
How to force the fan to always spin?
1,481,190,816,000
I am trying to find a way to access and/or control fan speed via Linux on an Intel Q45 Express/ICH10DO chipset. This chipset contains a feature called Intel Quiet System Technology (Intel QST), which is a part of the Intel Management Engine (Intel ME) running on an embedded co-processor. Intel describes QST as follows: The Intel Management Engine (ME) hosts a firmware subsystem – Intel Quiet System Technology (QST) – that provides support for the monitoring of temperature, voltage, current and fan speed sensors that are provided within the Chipset, the Processor and other devices on the Motherboard. For each sensor, a Health Status, based upon established thresholds, will be determined at regular intervals. Intel QST also provides support for acoustically-optimized fan speed control. Based upon readings obtained from the temperature sensors, Intel QST will determine, over time, the optimal speeds at which to operate the available cooling fans, in order to address existing thermal conditions with the lowest possible acoustic impact. The Intel ICH10 datasheet states: 5.24 Intel® Quiet System Technology (Intel® QST) The ICH10 implements three PWM and 4 TACH signals for Intel Quiet System Technology (QST). Note: Intel Quiet System Technology functionality requires a correctly configured system, including an appropriate (G)MCH with Intel ME, Intel ME Firmware, and system BIOS support. It goes on to describe the PWM Outputs, TACH Inputs and Thermal Sensors. This article claims that a Linux driver for Intel QST was available in December 2012: Earlier this year there was early support for Intel QST in LM_Sensors while being announced now is a new Intel QST driver for Linux. The code for this new Quiet System Technology driver is currently on GitHub. The above-mentioned code was not actually on GitHub, but rather in a privately hosted git repository (http://mose.dyndns.org/mei.git) that used the defunct dyndns.org service.
I have spent some time looking through the Linux kernel source (v4.16.7) but so far, I haven't found any trace of this driver. Was Intel QST support ever included in the Linux kernel? If so, which driver/kernel module(s) are required for Intel QST support?
This answer documents definitive information on Linux support for Intel QST, which was assembled by tracking down archives of the defunct lm-sensors mailing list and directly contacting the authors of some of those messages. The information here is organized in chronological order of the development of Linux QST support. History of Linux QST Support In February 2010, the Intel QST SDK was made publicly available. A June 2011 Intel forum post later mentioned that the HECI driver from www.openamt.org was no longer needed to run the SDK. A February 2012 message on the lm-sensors mailing list showed the kind of information available via a modified version of the Intel QST SDK (the "gigaplex version"), and indicated that hwmon QST support would be welcome, if it could be implemented without relying on the QST SDK: Fan Speed Sensor 1: Health: Normal Usage: Processor Thermal Module Fan Reading: 1063 NonCrit: 300.000 Crit: 250.000 NonRecov: 200.000 Fan Speed Controller 1: Health: Normal Usage: Processor Fan Controller Control: Manual Duty Cycle: 2.95 If someone finds the time to dig through the SDK and write a hwmon driver, I would be happy to review and test it. That looks like a major effort, though, since it looks like at least some of the SDK code would have to be ported to run in the kernel. By December 2012, someone had actually developed just such a driver, as evidenced in this message on the LKML: I've written a driver for the Intel Quiet System Technology (QST) function of the Management Engine Interface found on recent Intel chipsets. The module was originally developed for Linux 2.6.39, was named qst-hwmon, and provided support for QST v1 by implementing an entire mei driver from scratch. There was further discussion about a second module qst2-hwmon that would implement support for QST v2. 
A March 2013 note on the hwmon hardware support page indicates that all known attempts to implement Linux support for Intel QST had apparently stalled: (2013-03-20) The ICH8 (82801H) and several later Intel south bridges have embedded sensors, named MEI or QST. These are not yet supported, due to a lack of technical documentation and support from Intel. The OpenAMT project is supposed to help, but in practice not much is happening. Or maybe there is some hope? Or here, or here. However, a November 2014 bug report by the original developer of qst-hwmon indicated that the driver was still being worked on as late as November 29, 2014, and that it had been ported to Linux 3.14.18. Current State of Linux QST Support The qst-hwmon kernel module I finally managed to track down the current location of the git repository for the kernel module. To get a copy of the source code: git clone http://eden.mose.org.uk/mei.git This kernel module has not yet made it into the main Linux kernel source (as of kernel 4.19). The code compiles cleanly for Linux 4.16.7, producing 4 modules, which should be copied to the appropriate modules directory: make cp intel-mei.ko /lib/modules/4.16.7/kernel/drivers/hwmon/ cp mei-pci.ko /lib/modules/4.16.7/kernel/drivers/hwmon/ cp qst-dev.ko /lib/modules/4.16.7/kernel/drivers/hwmon/ cp qst-hwmon.ko /lib/modules/4.16.7/kernel/drivers/hwmon/ And update the module dependencies: depmod Then the modules may be loaded: modprobe intel-mei modprobe mei-pci modprobe qst-dev modprobe qst-hwmon And then you can verify that the /sys/bus/intel-mei/devices/ folder contains some relevant entries. This is not currently working for me, but I believe it is due to having the default Intel MEI driver compiled into the kernel. Further work will be needed to get lm_sensors to detect the qst_hwmon driver. The above mailing list archives indicate that lib-sensors may need to be patched to properly identify the intel-mei bus provided by these modules. 
Update: I'm in contact with the developer of the driver, so I hope to get the definitive instructions documented here soon. Alternative Approach using Intel QST SDK and meifand Here is a writeup (December 2015) on controlling fans via the "gigaplex version" of the Intel QST SDK (February 2012), and using meifand (not lm-sensors) as a daemon process to access the sensor information.
What is the state of Linux kernel support for Intel Quiet System Technology (Intel QST)?
1,481,190,816,000
I am facing issues with my ThinkPad T410's fan. Now and then the fan stops working (meaning 0 rpm). For a long time, the only solution for me was to shut down (not restart, but power off) the system and then boot again. (Which is, as you can guess, not a good solution. Sometimes I even had to cancel CPU-heavy tasks to avoid data corruption caused by the system's emergency shutdown when the temperature goes higher than 100 °C.) I found out that going into suspend mode also helps bring the fan back to work. I would like to know which processes are started when the computer is "coming back" from suspend mode, so I can force the fan to start again without even going into suspend mode. To make sure: I don't want to control the fan itself, I want to "restart" it manually.
Using the following line allows me to restart the fan without suspending my laptop. echo disable | sudo tee /proc/acpi/ibm/fan; sleep 5; echo enable | sudo tee /proc/acpi/ibm/fan Thanks to @Stephen Harris
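The disable/enable cycle from that one-liner can be wrapped in a small function so the interface path and the pause are easy to adjust. The default path is the thinkpad_acpi interface from the answer; the function name and the idea of parameterizing the path (which also makes the logic testable against a scratch file) are mine.

```shell
# Wrapper around the disable/enable fan-restart cycle from the answer.
# FAN defaults to the thinkpad_acpi interface; run as root for real use.
kick_fan() {  # kick_fan [fan-file] [pause-seconds]
  local fan=${1:-/proc/acpi/ibm/fan} pause=${2:-5}
  echo disable > "$fan"   # stop the fan controller briefly
  sleep "$pause"
  echo enable > "$fan"    # re-enable it, which restarts the fan
}
```

For real use, invoke it in a root shell as `kick_fan` with no arguments.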
Restart fan manually in Linux
1,481,190,816,000
I'm trying to lock the RPM of my AMD Radeon video card fans at full speed:

echo 1 > /sys/class/hwmon/hwmon1/pwm1_enable
echo 255 > /sys/class/hwmon/hwmon1/pwm1

What I have tried so far

Obviously, it doesn't work due to missing permissions (even with sudo/root) because it is /sys:

$ sudo su
$ echo 255 > /sys/class/drm/card1/device/hwmon/hwmon1/pwm1
bash: echo: write error: Invalid argument

I have also tried a sysfs config to edit these params, but it didn't work:

$ cat /etc/sysfs.conf
class/drm/card1/device/hwmon/hwmon1/pwm1 = 255
class/drm/card1/device/hwmon/hwmon1/pwm1_enable = 1

echo 5 | sudo tee ... also doesn't work. Neither does sudo sh -c:

sudo sh -c 'echo 225 > /sys/class/drm/card1/device/hwmon/hwmon1/pwm1'
sh: 1: echo: echo: I/O error

The Arch Linux Wiki states it should be possible, though: https://wiki.archlinux.org/index.php/fan_speed_control#Configuration_of_manual_control They edit values directly with echo, and it looks like it works for them. Another guide also recommends configuring fans this way: https://linuxconfig.org/overclock-your-radeon-gpu-with-amdgpu

The Python amdgpu-fan package also doesn't work for me. sudo fancontrol doesn't work either:

$ sudo fancontrol
Loading configuration from /etc/fancontrol ...
Common settings:
INTERVAL=10
Settings for hwmon1/pwm1:
Depends on hwmon1/temp1_input
Controls
MINTEMP=10
MAXTEMP=60
MINSTART=50
MINSTOP=0
MINPWM=0
MAXPWM=255
AVERAGE=1
Enabling PWM on fans...
Starting automatic fan control...
/usr/sbin/fancontrol: line 649: echo: write error: Invalid argument
Error writing PWM value to /sys/class/hwmon/hwmon1/pwm1
Aborting, restoring fans...
Verify fans have returned to full speed

The daemon (service) also doesn't work:

fancontrol[1877]: MAXPWM=255
fancontrol[1877]: AVERAGE=1
fancontrol[1877]: Enabling PWM on fans...
fancontrol[1877]: Starting automatic fan control...
fancontrol[1877]: /usr/sbin/fancontrol: line 649: echo: write error: Invalid argument
fancontrol[1877]: Error writing PWM value to /sys/class/hwmon/hwmon1/pwm1
fancontrol[1877]: Aborting, restoring fans...
fancontrol[1877]: Verify fans have returned to full speed
systemd[1]: fancontrol.service: Main process exited, code=exited, status=1/FAILURE
systemd[1]: fancontrol.service: Failed with result 'exit-code'.

To sum up: it seems that I can't edit amdgpu-related /sys/ entries at all.

Part 2

It seems that there has to be another way around this, like some amdgpu config or something like that. Maybe override kernel-defined values during boot? In Windows, it's possible to tune fans directly from the AMD Radeon driver GUI app. I don't want fancy curves; I'm simply trying to force-lock a static RPM (full-on mode). I'm using amdgpu-pro drivers, Ubuntu 20.04. I'd like to avoid using scripts like fancontrol.

The question itself

I wonder if it's possible just to set pwm1_enable to 1 and pwm1 to 255? The suggested method looks like it should work, but Ubuntu 20.04's security limitations seem more restrictive than other distros'.

Update

This thing works! But only for 1-2 seconds; after that, the fans go back to the system-defined speed: https://github.com/DominiLux/amdgpu-pro-fans/blob/master/amdgpu-pro-fans.sh

Update 2

Disabling pwm works for about 1-2 seconds:

echo 0 > /sys/class/hwmon/hwmon1/pwm1_enable

But after that, some daemon reverts this value back to 2. How can I prevent it from being changed by anything except me, e.g. by the system?
In case anyone is interested, the solution I made and the corresponding systemd service are here: redfan https://github.com/nmtitov/redfan So far my best guess is to write the following script and keep it always running in the background: while sleep 1; do echo 0 > /sys/class/drm/card1/device/hwmon/hwmon1/pwm1_enable; done Every second I "disable" pwm and make the fans run at max speed. The driver (or something else) restores the value, but the next second I immediately disable it again.
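A slightly more defensive variant of that loop restores automatic control when the script exits. The sysfs path is the one from the question; the assumption that writing 2 puts the driver back in automatic mode comes from the question's observation that a daemon "reverts this value back to 2". The helper functions are parameterized via PWM_FILE so the write logic can be exercised safely.

```shell
# Keep pwm1_enable pinned to 0 (full speed) and restore automatic mode (2)
# on exit. PWM_FILE defaults to the sysfs path from the question.
PWM_FILE=${PWM_FILE:-/sys/class/drm/card1/device/hwmon/hwmon1/pwm1_enable}

pin_once() {          # one iteration of the pinning loop
  echo 0 > "$PWM_FILE"
}

restore_auto() {      # put the driver back in automatic fan control
  echo 2 > "$PWM_FILE"
}

# Real use (as root):
#   trap restore_auto EXIT
#   while sleep 1; do pin_once; done
```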
How to lock fan speed for amd gpu in Ubuntu 20.04?
1,481,190,816,000
I have a strange (and annoying) problem with my ASUS UX32LA notebook: I suspect that multiple Linux distros (Ubuntu 16.04 and newer major versions using the 4.18.0-17-generic kernel, plus a Fedora 28 LiveUSB) somehow break BIOS control over the fans. I also tried upgrading Ubuntu to the 5.x kernel line. Every attempt was followed by a BIOS re-flash. No attempt brought success. Why? One day (I don't remember when; I don't use this notebook very often) the fans started to run at full speed when using Linux (it was a 4.x line kernel). First I thought of rebooting to Windows to see if the problem persisted. It turned out that indeed the fans were still at full speed on Windows. Then I thought that it might be a hardware issue. I took the computer in for repair and was amused when they told me that re-flashing the BIOS made the fans work properly on Windows (they refused to check on Linux...). No hardware issue. Having the computer back, I used Windows for some time (a week or so) to see if the issue was gone. It was. After booting into Linux, the issue was back as soon as the CPU temperature rose enough for the fan to turn on. It turned on to full speed and remained so. After that it doesn't matter whether it's Linux or Windows: fans at full speed. Re-flashing the BIOS solves the issue as long as Linux is not booted. On Windows the fans work normally. I don't expect an easy answer here, but maybe someone could give me a hint where to start debugging? I found some posts suggesting it may be related to the Differentiated System Description Table...
Please check the BIOS settings before installing Linux. The common BIOS options for a Linux installation are to enable CSM support and select the "UEFI and legacy" support option in the boot device control. Then, in the secure boot option, set the OS type to "Other OS".
Multiple Linux distros break BIOS fan control?
1,481,190,816,000
I was wondering how Linux would handle a gaming computer, so I built one, but as we know GeForce does not like Linux as much as AMD does, which is why I chose the latter. I built a computer with an AMD Ryzen 7 1800X CPU and a Radeon RX 560D GPU, as the Vega is too expensive for me to purchase and benchmarks said the 560 currently has the best cost-benefit ratio. After some research I discovered that the suffix D means it has a slightly lower clock speed in order to save some power consumption compared with the RX 560 without the D. After countless crashes during random gaming I finally found out the problem is the GPU overheating: its fan speed tends to follow the CPU fan speed, but of course the CPU is under much less load than the GPU in some games. I partially solved the problem by customizing the fan speed based on GPU temperature instead of CPU temperature; it now grows gradually and reaches maximum speed at 50 degrees Celsius. The problem is that in some games it stays at maximum speed all the time and eventually still crashes. Describing the crash: the screen blinks and then goes black, the GPU fan stops, the keyboard LED blinks and then turns off, the mouse does the same, the other CPU fan keeps running; sometimes the system stays frozen forever, sometimes it auto-reboots. As a reboot is required, I could not find any clue in the system logs. Initially I thought it was a kernel panic, but even using kdump and duplicating the kernel, the system still crashes in a way I cannot recover from. I do not know if Windows would have the same problem, but I strongly believe it would not; I have never seen someone with the same problem on Windows. So my question is: is there some way to tell the kernel to make the GPU take it easy when it is about to overheat, maybe by automatically reducing the GPU clock speed?
I found the solution. There are some files in /sys/class/drm/card0/device: the file pp_dpm_mclk indicates the GPU memory clock, and the file pp_dpm_sclk indicates the GPU core clock. Mine:

$ egrep -H . /sys/class/drm/card0/device/pp_dpm_*
/sys/class/drm/card0/device/pp_dpm_mclk:0: 300Mhz
/sys/class/drm/card0/device/pp_dpm_mclk:1: 1500Mhz *
/sys/class/drm/card0/device/pp_dpm_pcie:0: 2.5GB, x8 *
/sys/class/drm/card0/device/pp_dpm_pcie:1: 8.0GB, x16
/sys/class/drm/card0/device/pp_dpm_sclk:0: 214Mhz *
/sys/class/drm/card0/device/pp_dpm_sclk:1: 481Mhz
/sys/class/drm/card0/device/pp_dpm_sclk:2: 760Mhz
/sys/class/drm/card0/device/pp_dpm_sclk:3: 1000Mhz
/sys/class/drm/card0/device/pp_dpm_sclk:4: 1050Mhz
/sys/class/drm/card0/device/pp_dpm_sclk:5: 1100Mhz
/sys/class/drm/card0/device/pp_dpm_sclk:6: 1150Mhz
/sys/class/drm/card0/device/pp_dpm_sclk:7: 1196Mhz

And the file power_dpm_force_performance_level indicates the profile, which can be low, auto or manual; the default is auto. When low, it always runs at the lowest clock, which is not exactly what I want, so I set it to manual and made a script that keeps changing the clock according to the GPU temperature. Voilà, it worked! To change the clock with the manual profile, just write a number to the file pp_dpm_sclk that represents the line, starting with 0 (in my case up to 7). If you are interested in my script, here it is.
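The temperature-to-clock mapping that such a script needs might look like the function below. Only the sclk indices (0–7) come from the listing above; the temperature breakpoints are made-up illustrations to be tuned per card.

```shell
# Map a GPU temperature (°C) to a pp_dpm_sclk level index (0-7).
# Breakpoints are illustrative assumptions; tune them for your card.
temp_to_sclk() {  # temp_to_sclk <temp_c>
  local t=$1
  if   [ "$t" -lt 50 ]; then echo 7   # cool: allow the full clock
  elif [ "$t" -lt 60 ]; then echo 5
  elif [ "$t" -lt 70 ]; then echo 3
  else                       echo 0   # hot: drop to the lowest clock
  fi
}

# Real use (as root, after setting power_dpm_force_performance_level
# to manual):
#   echo "$(temp_to_sclk "$gpu_temp")" > /sys/class/drm/card0/device/pp_dpm_sclk
```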
How to prevent GPU from overheating and auto turning off
1,481,190,816,000
I have to manage 50 workstations of different brands, mostly DELL T76** and HP Z800. They all run CentOS 7.4. I would like a tool to check fan speed on the command line, and not only CPU fans. Is there such a general tool, or will it always depend on the motherboard?
I've solved this using lm_sensors as suggested by @bananguin. Here is an explanation of the basic usage. To install on CentOS: sudo yum install lm_sensors To set this up: sudo sensors-detect This will detect and set up the various hardware. The "safe" answers are the defaults. Finally, the sensors command will display the values for the previously detected hardware. Also, I'm using the watch command to check live updates. Fancontrol is based on pwm-capable sensors, which apparently aren't present in my case.
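For scripting this across a mixed fleet, fan RPMs can also be read straight from sysfs without parsing sensors output. The hwmon layout varies per board — exactly the caveat the question raises — so the function below just walks whatever fan tachometers the kernel exposes; the function name and output format are mine.

```shell
# List every fan tachometer exposed under /sys/class/hwmon, one per line:
#   <chip name> <fan attribute> <rpm>
list_fans() {  # list_fans [hwmon-root]
  local root=${1:-/sys/class/hwmon} f
  for f in "$root"/hwmon*/fan*_input; do
    [ -e "$f" ] || continue   # skip if the glob matched nothing
    printf '%s %s %s\n' \
      "$(cat "$(dirname "$f")/name" 2>/dev/null || echo unknown)" \
      "$(basename "$f")" \
      "$(cat "$f")"
  done
}
```

On boards whose sensor chip is supported by a hwmon driver this gives the same readings as sensors; on boards with no driver it simply prints nothing, which makes it easy to spot the machines that need BMC/IPMI tooling instead.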
Is there a general command line tool to manage fan speed?
1,481,190,816,000
How can I set the fan speed to 100% or more in Linux?
Be aware that fiddling around with the fan speed can overheat your machine and kill components! Anyway, the Arch Linux wiki has a page describing how to set up lm-sensors and fancontrol to achieve speed control.
Setting processor fan to 100%
1,481,190,816,000
How can I adjust fan speed according to hard drive temperature via Fancontrol?
I finally found a simple script to control fan speed according to hard drive temperature via Fancontrol, Hddtemp, and Lm-sensors. In the following script, "/dev/sda" is the hard disk to be monitored, and "/Fancontrol/Hddtemp" is the output file to be read by Fancontrol.

Press Ctrl + Alt + T to open Terminal and run the following command to check whether "/dev/sda" is the correct one:

sudo hddtemp /dev/sd[a-z]

Use only the one supported by Hddtemp, which will display the temperature rather than "S.M.A.R.T. not available". Replace "/dev/sda" with the correct one in the script if necessary.

If you have not yet configured Fancontrol, see this page, this page, and this page and run the following commands one by one (restart Linux after running the first one):

sudo sensors-detect
watch sensors
sudo pwmconfig
sudo service fancontrol start

Then, go through the procedure below:

(1) Run the following command to create a script file.

sudo mkdir -p "/Fancontrol/" && sudo xed /Fancontrol/HDD_temp

(2) Copy the following script into the file and save it.

#!/bin/bash
File=/Fancontrol/Hddtemp
while true
do
  temperature=$(sudo hddtemp -n /dev/sda)
  echo $(($temperature * 1000)) > "$File"
  sleep 30
done

(3) Run the following command to make it executable.

sudo chmod +x /Fancontrol/HDD_temp

(4) Run the following command to create a service file.

sudo xed /lib/systemd/system/HDD_temp.service

(5) Copy the following lines into the file and save it.

[Service]
ExecStart=/Fancontrol/HDD_temp

[Install]
WantedBy=multi-user.target

(6) Run the following commands one by one:

sudo chmod 664 /lib/systemd/system/HDD_temp.service
sudo systemctl daemon-reload
sudo systemctl start HDD_temp.service
sudo systemctl enable HDD_temp.service

Then, the script "HDD_temp" will be run as a system service at Linux startup.

(7) Run the following command to edit "fancontrol", the configuration file.

sudo xed /etc/fancontrol

Find the line that begins with "FCTEMPS".
For example: FCTEMPS=hwmon1/pwm1=hwmon1/temp1_input On that line, “hwmon1/temp1_input” is the temperature (e.g. the chipset temperature) currently read by Fancontrol. Replace it with “/Fancontrol/Hddtemp”, and the line will become: FCTEMPS=hwmon1/pwm1=/Fancontrol/Hddtemp Save the file and run the following command to restart Fancontrol. sudo service fancontrol restart Then, the fan controlled by “hwmon1/pwm1” will respond to “/Fancontrol/Hddtemp”, the hard disk temperature. Note that "HDD_temp" and "Hddtemp" are the script file and output file respectively. Don't confuse them.
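The key detail in the script above is the `* 1000`: fancontrol reads temperatures in millidegrees Celsius, while hddtemp -n reports whole degrees, so the reading has to be scaled before it is written out. The conversion can be isolated in a small function (the function name is mine):

```shell
# fancontrol expects millidegrees Celsius; hddtemp -n reports whole
# degrees, so multiply by 1000 before writing the value out.
to_millideg() {  # to_millideg <temp_c>
  echo $(( $1 * 1000 ))
}

# e.g. writing the converted reading for fancontrol to pick up:
#   to_millideg "$(sudo hddtemp -n /dev/sda)" > /Fancontrol/Hddtemp
```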
Adjust fan speed via Fancontrol according to hard disk temperature (Hddtemp)
1,481,190,816,000
Hi I have a laptop (Fujitsu Siemens Amilo 4000), I'd like to control the cooling fan manually. How do I do that? /proc/acpi/fan/ is empty, the fan is otherwise working well. Distro is Fedora 14.
The fujitsu_laptop module, which provides ACPI control for Fujitsu-Siemens laptops, does not appear to have fan control code (as of today); see: http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=drivers/platform/x86/fujitsu-laptop.c (You can look at the ThinkPad ACPI code in the same directory; it has a fan subdriver.) I don't think it's possible to achieve what you want with your hardware.
How to deliberately activate cooling fan of laptop?
1,481,190,816,000
How may I slow down or turn off my fan in Linux Mint Debian? In Windows 7, it had a function called 'System Cooling Policy' and I had set for passive cooling, so the laptop's fan wouldn't spin up. Just using a simple code studio makes the fan speed up a lot and it's super loud. Windows 7 had this function, and I really liked it, but I haven't found a similar function in my Linux Mint Debian build.
Note before starting: this functionality depends on both your hardware and software. If your hardware doesn't support fan speed controls, or doesn't expose them to the OS, it is very likely that you cannot use this solution. If it does, but the software (i.e. the kernel) doesn't know how to control it, you are out of luck.

Install the lm-sensors and fancontrol packages.

Configure lm-sensors

In a terminal, type sudo sensors-detect and answer YES to all YES/no questions. (Potentially, this can damage your system or cause a system crash. For a lot of systems it is safe. There is no guarantee that this process will not damage your system permanently; I just think the chance of such a critical failure is really, really low. Saving all your work before handling system configuration is always a good idea in case of crashes/freezes/restarts. If you feel unsure, read the comments and search the web for a high-level overview before YES-ing everything; maybe being selective with your YES-es will still be enough.)

At the end of sensors-detect, a list of modules that need to be loaded will be displayed. Type "yes" to have sensors-detect insert those modules into /etc/modules, or edit /etc/modules yourself.

Run sudo service module-init-tools restart. This will read the changes you made to /etc/modules in the previous step and insert the new modules into the kernel. Note: if you're running Ubuntu 13.04 or higher, this command should be replaced by sudo service kmod start.

Configure fancontrol

In a terminal, type sudo pwmconfig. This script will stop each fan for 5 seconds to find out which fans can be controlled by which PWM handle. After the script loops through all fans, you can configure which fan corresponds to which temperature. You will have to specify which sensors to use. This is a bit tricky. If you have just one fan, make sure to base the fan speed on a temperature sensor for your core. Run through the prompts and save the changes to the default location.
Make adjustments to fine-tune /etc/fancontrol and use sudo service fancontrol restart to apply your changes. (In my case I set the interval to 2 seconds.)

Set up the fancontrol service

Run sudo service fancontrol start. This will also make the fancontrol service run automatically at system startup.

In my case, /etc/fancontrol for the CPU I used:

Settings for hwmon0/device/pwm2:
(Depends on hwmon0/device/temp2_input)
(Controls hwmon0/device/fan2_input)
INTERVAL=2
MINTEMP=40
MAXTEMP=60
MINSTART=150
MINSTOP=0
MINPWM=0
MAXPWM=255

and on a different system it is:

INTERVAL=10
DEVPATH=hwmon1=devices/platform/coretemp.0 hwmon2=devices/platform/nct6775.2608
DEVNAME=hwmon1=coretemp hwmon2=nct6779
FCTEMPS=hwmon2/pwm2=hwmon1/temp2_input
FCFANS=hwmon2/pwm2=hwmon2/fan2_input
MINTEMP=hwmon2/pwm2=49
MAXTEMP=hwmon2/pwm2=83
MINSTART=hwmon2/pwm2=150
MINSTOP=hwmon2/pwm2=15
MINPWM=hwmon2/pwm2=14
MAXPWM=hwmon2/pwm2=255

Here is some useful info on the settings and what they really do. Source: https://askubuntu.com/questions/22108/how-to-control-fan-speed

On reducing overheating: TLP

TLP is my favorite power management tool in Linux. It's a daemon that is pre-configured to reduce overheating as well as improve battery life. You just need to install TLP and restart your system. It will auto-start at each boot and keep running in the background. I have always included the installation of TLP in the top things to do after installing Ubuntu, for its simplicity and usefulness.

To install TLP in Ubuntu-based Linux distributions, use the following commands:

sudo add-apt-repository ppa:linrunner/tlp
sudo apt-get update
sudo apt-get install tlp tlp-rdw

If you are using a ThinkPad, you need an additional step:

sudo apt-get install tp-smapi-dkms acpi-call-dkms

Restart your system after installation. Check this page for installation instructions for other Linux distributions. You may start to feel the difference in a few hours or in a couple of days.
To uninstall TLP, you can use the following commands: sudo apt-get remove tlp sudo add-apt-repository --remove ppa:linrunner/tlp Source:https://itsfoss.com/reduce-overheating-laptops-linux/ Officially supported Ubuntu releases; as well as corresponding Linux Mint releases; but not LMDE (see Debian) Package Repository Add the TLP-PPA to your package sources: See above commands Debian Debian oldstable, stable, testing and unstable; as well as Linux Mint Debian Edition (LMDE) Execute the following steps in a root shell. Package Repository Debian stable, testing and unstable TLP and ThinkPad-related packages below are available via the official Debian repository. Note: due to the pending freeze of Debian 10.0 "Buster", the maintainer is currently not allowed to provide packages >= 1.2 in testing (Buster) and in stable (Stretch), oldstable (Jessie) via backports. Please download and install from unstable: tlp, tlp-rdw. Debian 9.0 "Stretch" TLP packages for the newest version are available via Debian Backports (read more). Add the following line to your /etc/apt/sources.list: deb http://ftp.debian.org/debian stretch-backports main Debian 8.0 "Jessie" TLP packages are available via Debian Backports only (read more). 
Add the following line to your /etc/apt/sources.list: deb http://ftp.debian.org/debian jessie-backports-sloppy main Update package data: apt-get update Package Install Install the following packages: tlp (main) – Power saving tlp-rdw (main) – optional – Radio Device Wizard tp-smapi-dkms (main) – optional, ThinkPad only – provides battery charge thresholds, recalibration and specific status output in tlp-stat for older ThinkPads acpi-call-dkms (main) – optional, ThinkPad only – provides battery charge thresholds and recalibration for newer ThinkPads (X220/T420 and later) The above packages may be installed via package management tools or with a terminal command: apt-get install tlp tlp-rdw For Debian Backports use: apt-get install -t stretch-backports tlp tlp-rdw or apt-get install -t jessie-backports-sloppy tlp tlp-rdw instead. For ThinkPads an additional apt-get install tp-smapi-dkms acpi-call-dkms Source: https://linrunner.de/en/tlp/docs/tlp-linux-advanced-power-management.html
How to change the system cooling policy on Linux Mint Debian laptop
1,481,190,816,000
Here I am repeating a question previously asked in a sister forum, as it is relevant here and I have neither received a response, nor been able to resolve the issue. On my ThinkPad T470 which is a dual boot with Linux Ubuntu 18.04 and Windows 10, everything was working fine in Ubuntu until after a while I needed to boot Windows. Since then, the fan on the laptop runs constantly at full speed on Ubuntu. I have tried the common solutions such as setting acpi_osi=!Windows 2012 in the grub setting according to this answer or setting fan speed using thinkfan according to this answer. I have also checked my BIOS setting, but every thing looks normal as some options are set for performance and some set to be balanced between performance, energy consumption, and fan noise. The problem is Ubuntu seems to not recognize the BIOS settings or any other settings for that matter. None of the solutions above made any difference in the fan noise. Any help would be appreciated. GUESS: I am suspicious about ACPI not doing its job for some reason. OBSERVATION 1: One observation that may be worth mentioning is that the fan runs at normal/low speed when I boot the laptop and the grub menu prompts me to choose an operating system (Ubuntu or Windows) to continue with. Then the fan takes off to full speed as I choose Ubuntu. I think this means that BIOS settings work fine. OBSERVATION 2: Trying to use fancontrol according to this answer, after running sudo pwmconfig, I get the following message: hwmon3/pwm1_enable stuck to 2 Manual control mode not supported, skipping hwmon3/pwm1. There are no usable PWM outputs. EDIT 1: The power settings in Ubuntu doesn't seem to alter fan speed. EDIT 2: Fan runs normally on Windows. EDIT 3: The BIOS version on my machine is 1.59
It sounds like you are going to have to do some manual intervention to get ACPI working properly with your hardware: https://github.com/vmatare/thinkfan/

Enable fan control in the thinkpad_acpi module:

echo "options thinkpad_acpi fan_control=1" > /etc/modprobe.d/thinkfan.conf

Load the module like this:

$ su
# modprobe thinkpad_acpi
# cat /proc/acpi/ibm/fan

Then enable the service:

systemctl enable thinkfan

You will need to configure a temperature profile by editing /etc/thinkfan.conf. Examples are provided as thinkfan.conf.simple. Good luck.
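For orientation, a minimal /etc/thinkfan.conf in the legacy syntax might look like the sketch below. The sensor path, the temperature bands, and the fan levels are all assumptions to be adapted from the shipped thinkfan.conf.simple example, not values known to suit the T470.

```
# Sketch of a legacy-syntax /etc/thinkfan.conf; adjust the sensor
# path and the (level, low, high) temperature bands to your machine.
tp_fan /proc/acpi/ibm/fan
hwmon /sys/class/thermal/thermal_zone0/temp

(0, 0,  55)
(1, 48, 60)
(3, 52, 66)
(7, 60, 32767)
```

Each tuple means: run the fan at the given level while the temperature stays between low and high, stepping up or down as the bands are crossed.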
Fan constantly running at full speed
1,481,190,816,000
I am running a DELL Vostro 3750 with Linux Mint, and despite all my attempts the fan is still running very fast and loud. So far I have first tried editing Grub according to this article. Then I tried to install the nvidia drivers according to this manual, but all I got was a screen telling me that the X screen can't be started because no X devices were found, so I had to delete the /etc/X11/xorg.conf file. Before, I had Ubuntu 12.04 and the problems were the same. There I also tried Jupiter and the Bumblebee project, but with no results.

This is my computer's information:

System:    Host ntb Kernel 3.2.0-4-amd64 x86_64 (64 bit) Distro Linux Mint Debian Edition
CPU:       Quad core Intel Core i7-2670QM (-HT-MCP-) cache 6144 KB flags (lm nx sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx) bmips 17543.7
           Clock Speeds: (1) 800.00 MHz (2) 800.00 MHz (3) 800.00 MHz (4) 800.00 MHz (5) 800.00 MHz (6) 800.00 MHz (7) 800.00 MHz (8) 800.00 MHz
Graphics:  Card-1 Intel 2nd Generation Core Processor Family Integrated Graphics Controller
           Card-2 NVIDIA GF108 [GeForce GT 540M]
           X.Org 1.12.4 Res: [email protected] GLX Renderer N/A GLX Version N/A Direct Rendering N/A
Audio:     Card Intel 6 Series/C200 Series Chipset Family High Definition Audio Controller driver snd_hda_intel BusID: 00:1b.0
           Sound: Advanced Linux Sound Architecture Version 1.0.24
Network:   Card-1 Realtek RTL8111/8168B PCI Express Gigabit Ethernet controller driver r8169 v: 2.3LK-NAPI at port 3000 BusID: 04:00.0
           Card-2 Intel Centrino Wireless-N 1030 driver iwlwifi v: in-tree: BusID: 02:00.0
Disks:     HDD Total Size: 750.2GB (-) 1: /dev/sda ST9750420AS 750.2GB
Partition: ID:/ size: 49G used: 7.6G (17%) fs: rootfs ID:/ size: 49G used: 7.6G (17%) fs: ext4
Sensors:   System Temperatures: cpu: 65.0C mobo: 65.0C
           Fan Speeds (in rpm): cpu: N/A

EDIT: Ever since I asked this question I have, from time to time, tried to look for a comprehensive solution to this problem.
Eventually I hit on THIS absolutely perfect answer, which helped me solve all my troubles with overheating and fan noise. Just follow the instructions and you will be fine. In particular, I recommend using indicator-cpufreq.
The nvidia drivers will not affect the fan. You might want to avoid using the NVidia card, though: the integrated graphics card gives off less heat. In fact, I would disable the dedicated card in the BIOS; you almost certainly do not need it.

In any case, what you will need is to install i8kutils. This package will install certain modules and programs that are specific to Dell fans:

sudo apt-get install i8kutils
sudo modprobe i8k
i8kfan 0 1

You can play with i8kfan to see that the settings are correctly read and applied. If they are, add i8k to /etc/modules.

You should also choose the ondemand CPU scaling governor. The governor controls CPU frequency scaling. Your choices are:

performance: keeps the CPU at the highest possible frequency
powersave: keeps the CPU at the lowest possible frequency
userspace: exports the available frequency information to the user level (through the /sys file system) and permits user-space control of the CPU frequency
ondemand: scales the CPU frequency according to CPU usage (like the user-space frequency scaling daemons do, but in the kernel)
conservative: acts like ondemand but increases the frequency step by step

With ondemand, your CPU will only run at its highest speed when necessary. Ideally, this will be completely transparent for you: your machine will simply work as fast as necessary for the current tasks. To activate it, do:

echo ondemand | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

(Note that `sudo echo ondemand > file` would not work, because the redirection would be performed by your unprivileged shell, not by sudo.)
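Writing the governor as shown only touches cpu0; each core has its own scaling_governor file. The loop below sketches applying it to every core. Since writing under /sys needs root, the example dry-runs against a mock directory tree; on a real system SYSFS_BASE would be /sys/devices/system/cpu:

```shell
# Apply a scaling governor to every CPU core, not just cpu0.
# SYSFS_BASE is parameterised so this sketch can dry-run without root;
# on a real system it is /sys/devices/system/cpu.
set_governor() {
    gov=$1
    for f in "$SYSFS_BASE"/cpu[0-9]*/cpufreq/scaling_governor; do
        [ -w "$f" ] && printf '%s\n' "$gov" > "$f"
    done
}

# Dry-run against a mock tree so the example is safe to execute anywhere:
SYSFS_BASE=$(mktemp -d)
mkdir -p "$SYSFS_BASE/cpu0/cpufreq" "$SYSFS_BASE/cpu1/cpufreq"
echo performance > "$SYSFS_BASE/cpu0/cpufreq/scaling_governor"
echo performance > "$SYSFS_BASE/cpu1/cpufreq/scaling_governor"

set_governor ondemand
cat "$SYSFS_BASE"/cpu[0-9]*/cpufreq/scaling_governor
```

On the real path, the same function would need to run as root (e.g. via sudo sh -c).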
Despite all attempts fan is still running very loudly and fast
1,481,190,816,000
I'm using a Sony Vaio Pro 13 with Linux, but unfortunately the fan is already at maximum speed at only 50 degrees Celsius. How can I change the threshold at which it kicks in? The distro is Arch, btw.
Everything you need should be covered in the Arch Linux wiki article titled Fan speed control.

excerpt

Once sensors is properly configured, run pwmconfig to test and configure speed control. Follow the instructions in pwmconfig to set up basic speeds. The default configuration options should create a new file, /etc/fancontrol. You can augment the fan parameters in that config file. For example:

INTERVAL=10
DEVPATH=hwmon0=devices/platform/coretemp.0 hwmon2=devices/platform/w83627ehf.656
DEVNAME=hwmon0=coretemp hwmon2=w83627dhg
FCTEMPS=hwmon0/device/pwm1=hwmon0/device/temp1_input
FCFANS= hwmon0/device/pwm1=hwmon0/device/fan1_input
MINTEMP=hwmon0/device/pwm1=20
MAXTEMP=hwmon0/device/pwm1=55
MINSTART=hwmon0/device/pwm1=150
MINSTOP=hwmon0/device/pwm1=105

breakdown

MINTEMP: the temperature (°C) at which to SHUT OFF the CPU fan. Efficient CPUs often will not need a fan while idling. Be sure to set this to a temperature that you know is safe; setting it to 0 is not recommended and may ruin your hardware!

MAXTEMP: the temperature (°C) at which to spin the fan at its MAXIMUM speed. This should probably be set to perhaps 10 or 20 degrees (°C) below your CPU's critical/shutdown temperature. Setting it closer to MINTEMP will result in higher fan speeds overall.

MINSTOP: the PWM value at which your fan stops spinning. Each fan is a little different. Power tweakers can echo different values (between 0 and 255) to /sys/class/hwmon/hwmon0/device/pwm1 and then watch the CPU fan. When the CPU fan stops, use this value.

MINSTART: the PWM value at which your fan starts to spin again. This is often a higher value than MINSTOP, as more voltage is required to overcome inertia.
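To get a feel for how these parameters shape fan speed, the helper below approximates a roughly linear interpolation between (MINTEMP, MINSTOP) and (MAXTEMP, MAXPWM), using the example values from the config above. It illustrates the mapping, not fancontrol's exact internal algorithm:

```shell
# Approximate the PWM value such a config would apply at a temperature t,
# with MINTEMP=20, MAXTEMP=55, MINSTOP=105, MAXPWM=255 as in the example.
pwm_for_temp() {
    awk -v t="$1" 'BEGIN {
        mint = 20; maxt = 55; minstop = 105; maxpwm = 255
        if (t <= mint)      print 0        # below MINTEMP: fan off
        else if (t >= maxt) print maxpwm   # above MAXTEMP: full speed
        else printf "%d\n", minstop + (t - mint) * (maxpwm - minstop) / (maxt - mint)
    }'
}

pwm_for_temp 18   # prints 0 (fan off)
pwm_for_temp 40   # prints 190
pwm_for_temp 60   # prints 255 (full speed)
```

Raising MAXTEMP flattens the curve; raising MINSTOP lifts the whole low end, which is where fan noise at idle comes from.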
How can I configure the threshold when my fan sets in (laptop)
1,329,337,905,000
I can't measure how hot it gets during boot, but once the desktop launches it cools down rapidly. The GPU is an NVIDIA RTX 3060 Ti.
This is normal, yes, within reason. After reset, no power-saving mechanisms are enabled. They are enabled later on, usually by the driver loaded by the operating system.
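Once the system is up, one way to see whether the driver has enabled runtime power management for the card is to read the PCI device's runtime_status attribute in sysfs. The sketch below uses a mock directory so it runs anywhere; on a real system PCI_DEV would be something like /sys/bus/pci/devices/0000:01:00.0 (the exact address is system-specific):

```shell
# Check a PCI device's runtime power-management state. PCI_DEV is a mock
# directory here so the sketch runs without a GPU; on real hardware it is
# the device's directory under /sys/bus/pci/devices/.
PCI_DEV=$(mktemp -d)
mkdir -p "$PCI_DEV/power"
echo suspended > "$PCI_DEV/power/runtime_status"   # mock value for the demo

status=$(cat "$PCI_DEV/power/runtime_status")
echo "runtime PM status: $status"
```

A status of "suspended" or "active" with low clocks indicates the driver's power management has taken over from the full-power reset state.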
Is it normal for GPU to overheat during boot with fans running max
1,329,337,905,000
I accidentally ended up using the Nouveau driver (as opposed to the proprietary NVIDIA driver) for my GPU today and was surprised by how well it worked. I am aware of the reclocking issue (that is, that the clock speeds are stuck low). Regardless, I'm considering switching to primarily using it, but I have one significant issue preventing me from doing so: my GPU's fans. When using Nouveau they constantly spin at almost 2000 RPM despite the card not being particularly warm (according to lm-sensors), and as a result they are very loud. I would like to set the fan curve to something more reasonable. How might I do this in Linux when using the Nouveau GPU driver? Worth noting is that I have a GTX 970, which according to this matrix has support for controlling the fan speed: https://nouveau.freedesktop.org/PowerManagement.html (edit: never mind, the GTX 970 is one generation too new to support this due to a firmware issue)
https://wiki.archlinux.org/index.php/nouveau#Fan_control

As for the fan curve, see man fancontrol:
https://wiki.archlinux.org/index.php/fan_speed_control#Fancontrol_(lm-sensors)
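Those wiki pages describe fan control through the standard hwmon sysfs attributes: pwm1_enable selects the control mode (1 = manual) and pwm1 takes a duty value from 0 to 255. Whether your card actually exposes them depends on the support matrix linked in the question. The sketch below writes to a mock directory so it can run without hardware; on a real system the path would be under /sys/class/drm/card0/device/hwmon/hwmon*/ and requires root:

```shell
# Manual fan control through the hwmon interface (sketch). HWMON is a
# mock directory here so the example is safe to run; on real hardware it
# would be /sys/class/drm/card0/device/hwmon/hwmon*/ (root required).
HWMON=$(mktemp -d)
: > "$HWMON/pwm1_enable"
: > "$HWMON/pwm1"

echo 1  > "$HWMON/pwm1_enable"   # 1 = manual control
echo 96 > "$HWMON/pwm1"          # ~38% duty cycle (0-255 scale)
cat "$HWMON/pwm1"
```

The fancontrol daemon from lm-sensors automates exactly these writes, turning the MINTEMP/MAXTEMP settings in /etc/fancontrol into a fan curve.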
How do I set my GPU's fan curve when using Nouveau?
1,329,337,905,000
I tried to find a solution, but this whole situation drives me crazy, especially with a server next to me whose three fans all run at max speed for no obvious reason. This person seems to have the same problem: sensors-detect not detecting the fans. I have an HP ProLiant server with a quad-core CPU, running CentOS. I did the following steps:

sudo yum install lm_sensors
sudo sensors-detect

and confirmed everything with yes. Output of sensors:

acpitz-virtual-0
Adapter: Virtual device
temp1:        +8.3°C  (crit = +31.3°C)

Here you can already see that something is wrong. Why do I see just one temperature and nothing else? Shouldn't the fans and the temperatures of the other CPUs be there?

sudo pwmconfig

Output:

/sbin/pwmconfig: There are no pwm-capable sensor modules installed

I tried to install fancontrol with sudo yum install fancontrol, but this package does not exist.

sudo fancontrol

Output:

Loading configuration from /etc/fancontrol ...
Error: Can't read configuration file

In the BIOS I did not find anything to control the fan speed, like disabling "fans always on" or similar. Maybe that is also a way? I have literally no idea what to do, and any help is highly appreciated. Please note that I am new to this; please tell me about any assumptions you make and how to install any required packages. As a beginner it is super frustrating if somebody says "do this and that" but does not tell you where the file is or how to get it. Please let me know if you need any further information.

Regards,
René
Welcome to Unix & Linux StackExchange!

HP uses a proprietary fan controller system in their servers that is not supported by lm-sensors at all. In some models, the fan control is partially relegated to software, with a hardware failsafe: if the appropriate drivers are not communicating with the fan control hardware, the fans will go to full speed and stay that way until the drivers are installed and running.

In 2015, the Hewlett-Packard company was split in two: the enterprise IT unit became HPE (Hewlett-Packard Enterprise), and consumer-grade hardware was left under the main HP brand. ProLiant servers and their support are now provided by HPE, even if the server was originally sold under the main HP brand.

You should go to https://www.hpe.com and select Support -> Support Center. There, you can type your server model into a search field (e.g. "ProLiant DL380 G7") and get to a page where you can select the exact model you have (in case the model name you specified was ambiguous), the operating system you're using, and that you're looking for downloads. (You can get the exact model name with dmidecode -s system-product-name.) For CentOS, you can probably use the driver packages intended for the corresponding version of Red Hat Enterprise Linux.

Different models can have different driver sets, so without knowing your exact model, I cannot give more detailed instructions. But once you get the appropriate drivers up and running, you will usually immediately hear the difference in fan behavior, with no configuration needed. Also, if your server is a rack-mount model, be aware that rack servers are designed to pack the maximum amount of computing power into a unit of space in an air-conditioned server room, not to be easy on the ears.

On the question you linked, the problem may be the same, but the underlying cause is likely to be very different: on laptops, the fans are typically controlled by the ACPI firmware.
Through the kernel's ACPI features, you might be able to force the fans on, but there is usually no way to prevent laptop fans from running, as without them the system might quickly overheat and be damaged if the processor runs at full speed for any significant length of time - and as a laptop typically contains a lithium-ion battery, overheating it can cause a real risk of fire. Usually the only way to stop a laptop with Linux from running its fans is to use the cpufreq/cpupower commands to restrict the maximum CPU clock speed low enough so that the fans won't be needed.
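Once vendor driver packages are installed, a quick sanity check is whether their kernel modules are actually loaded. The module name used below (hpilo, the iLO kernel driver) is only an example; the relevant set varies by server generation. To stay self-contained, the sketch parses sample lsmod-style output rather than running lsmod:

```shell
# Sketch: check whether a vendor management module (e.g. hpilo) is loaded.
# The sample below stands in for real `lsmod` output so the example runs
# anywhere; on the server you would pipe lsmod itself instead.
lsmod_sample='Module                  Size  Used by
hpilo                  20480  0
coretemp               16384  0'

if printf '%s\n' "$lsmod_sample" | awk 'NR > 1 { print $1 }' | grep -qx hpilo; then
    hpilo_loaded=yes
else
    hpilo_loaded=no
fi
echo "hpilo loaded: $hpilo_loaded"
```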
HP ProLiant Server with centOS no fans detected
1,329,337,905,000
I need to monitor a Lenovo System x3650 M5 (8871) server. Unfortunately, lm_sensors only shows the CPU temperatures. Does anyone have advice on how I could monitor the fan speed with a command-line tool?

Output of sensors:

power_meter-acpi-0
Adapter: ACPI interface
power1:       141.00 W  (interval = 1.00 s)

coretemp-isa-0000
Adapter: ISA adapter
Physical id 0:  +30.0°C  (high = +92.0°C, crit = +102.0°C)
Core 0:         +21.0°C  (high = +92.0°C, crit = +102.0°C)
Core 2:         +23.0°C  (high = +92.0°C, crit = +102.0°C)
Core 3:         +22.0°C  (high = +92.0°C, crit = +102.0°C)
Core 4:         +22.0°C  (high = +92.0°C, crit = +102.0°C)
Core 8:         +23.0°C  (high = +92.0°C, crit = +102.0°C)
Core 10:        +20.0°C  (high = +92.0°C, crit = +102.0°C)
Core 11:        +21.0°C  (high = +92.0°C, crit = +102.0°C)
Core 12:        +20.0°C  (high = +92.0°C, crit = +102.0°C)

coretemp-isa-0001
Adapter: ISA adapter
Physical id 1:  +28.0°C  (high = +92.0°C, crit = +102.0°C)
Core 0:         +22.0°C  (high = +92.0°C, crit = +102.0°C)
Core 2:         +22.0°C  (high = +92.0°C, crit = +102.0°C)
Core 3:         +21.0°C  (high = +92.0°C, crit = +102.0°C)
Core 4:         +21.0°C  (high = +92.0°C, crit = +102.0°C)
Core 8:         +22.0°C  (high = +92.0°C, crit = +102.0°C)
Core 10:        +21.0°C  (high = +92.0°C, crit = +102.0°C)
Core 11:        +21.0°C  (high = +92.0°C, crit = +102.0°C)
Core 12:        +22.0°C  (high = +92.0°C, crit = +102.0°C)

Output of sudo sensors-detect:

# sensors-detect revision 3.4.0-4 (2016-06-01)
# System: LENOVO System x3650 M5: -[8871AC1]- [13]
# Board: LENOVO 01KN179
# Kernel: 3.10.0-514.6.1.el7.x86_64 x86_64
# Processor: Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz (6/79/1)

This program will help you determine which kernel modules you need to load to use lm_sensors most effectively. It is generally safe and recommended to accept the default answers to all questions, unless you know what you're doing.

Some south bridges, CPUs or memory controllers contain embedded sensors. Do you want to scan for them? This is totally safe. (YES/no):
Silicon Integrated Systems SIS5595...                       No
VIA VT82C686 Integrated Sensors...                          No
VIA VT8231 Integrated Sensors...                            No
AMD K8 thermal sensors...                                   No
AMD Family 10h thermal sensors...                           No
AMD Family 11h thermal sensors...                           No
AMD Family 12h and 14h thermal sensors...                   No
AMD Family 15h thermal sensors...                           No
AMD Family 16h thermal sensors...                           No
AMD Family 15h power sensors...                             No
AMD Family 16h power sensors...                             No
Intel digital thermal sensor...                             Success!
    (driver `coretemp')
Intel AMB FB-DIMM thermal sensor...                         No
Intel 5500/5520/X58 thermal sensor...                       No
VIA C7 thermal sensor...                                    No
VIA Nano thermal sensor...                                  No

Some Super I/O chips contain embedded sensors. We have to write to standard I/O ports to probe them. This is usually safe. Do you want to scan for Super I/O sensors? (YES/no):
Probing for Super-I/O at 0x2e/0x2f
Trying family `National Semiconductor/ITE'...               Yes
Found unknown chip with ID 0x3711
Probing for Super-I/O at 0x4e/0x4f
Trying family `National Semiconductor/ITE'...               Yes
Found unknown chip with ID 0x7f00

Some systems (mainly servers) implement IPMI, a set of common interfaces through which system health data may be retrieved, amongst other things. We first try to get the information from SMBIOS. If we don't find it there, we have to read from arbitrary I/O ports to probe for such interfaces. This is normally safe. Do you want to scan for IPMI interfaces? (YES/no):
Found `IPMI BMC KCS' at 0xcc0...                            Success!
    (confidence 8, driver `to-be-written')

Some hardware monitoring chips are accessible through the ISA I/O ports. We have to write to arbitrary I/O ports to probe them. This is usually safe though. Yes, you do have ISA I/O ports even if you do not have any ISA slots! Do you want to scan the ISA I/O ports? (YES/no):
Probing for `National Semiconductor LM78' at 0x290...       No
Probing for `National Semiconductor LM79' at 0x290...       No
Probing for `Winbond W83781D' at 0x290...                   No
Probing for `Winbond W83782D' at 0x290...                   No

Lastly, we can probe the I2C/SMBus adapters for connected hardware monitoring devices. This is the most risky part, and while it works reasonably well on most systems, it has been reported to cause trouble on some systems. Do you want to probe the I2C/SMBus adapters now? (YES/no):
Found unknown SMBus adapter 8086:8d22 at 0000:00:1f.3.
Sorry, no supported PCI bus adapters found.
Module i2c-dev loaded successfully.

Next adapter: mga i2c (i2c-0)
Do you want to scan it? (yes/NO/selectively):

Next adapter: SMBus I801 adapter at 1fe0 (i2c-1)
Do you want to scan it? (YES/no/selectively):
Client found at address 0x48
Probing for `National Semiconductor LM75'...                No
Probing for `National Semiconductor LM75A'...               No
Probing for `Dallas Semiconductor DS75'...                  No
Probing for `National Semiconductor LM77'...                No
Probing for `Analog Devices ADT7410/ADT7420'...             No
Probing for `Analog Devices ADT7411'...                     No
Probing for `Maxim MAX6642'...                              No
Probing for `Texas Instruments TMP435'...                   No
Probing for `National Semiconductor LM73'...                No
Probing for `National Semiconductor LM92'...                No
Probing for `National Semiconductor LM76'...                No
Probing for `Maxim MAX6633/MAX6634/MAX6635'...              No
Probing for `NXP/Philips SA56004'...                        No
Probing for `SMSC EMC1023'...                               No
Probing for `SMSC EMC1043'...                               No
Probing for `SMSC EMC1053'...                               No
Probing for `SMSC EMC1063'...                               No

Now follows a summary of the probes I have just done. Just press ENTER to continue:

Driver `to-be-written':
  * ISA bus, address 0xcc0
    Chip `IPMI BMC KCS' (confidence: 8)

Driver `coretemp':
  * Chip `Intel digital thermal sensor' (confidence: 9)

Note: there is no driver for IPMI BMC KCS yet. Check http://www.lm-sensors.org/wiki/Devices for updates.

Do you want to overwrite /etc/sysconfig/lm_sensors? (YES/no):
Unloading i2c-dev...
OK

Output of find:

# find /sys/ -iname '*fan*'
/sys/bus/platform/drivers/acpi-fan
/sys/kernel/slab/fanotify_event_info
/sys/kernel/slab/fanotify_perm_event_info
/sys/kernel/debug/tracing/events/syscalls/sys_enter_fanotify_init
/sys/kernel/debug/tracing/events/syscalls/sys_exit_fanotify_init
/sys/kernel/debug/tracing/events/syscalls/sys_enter_fanotify_mark
/sys/kernel/debug/tracing/events/syscalls/sys_exit_fanotify_mark
/sys/module/rcutree/parameters/rcu_fanout_leaf

English is not my native language, so please don't judge my spelling errors. I am also not sure whether this is the correct site for this question.
Your system has a correctly configured BMC with IPMI support, so you should be able to use ipmitool locally to extract all the monitoring information supported by your BMC:

yum install ipmitool
ipmitool sensor

(assuming the ipmi_si module is loaded, which should be the case on RHEL 7 on your setup). The interesting values are in the first two columns (sensor and value), and the fourth (health indicator):

CPU Temp         | 45.000     | degrees C  | ok    | 0.000     | 0.000     | 0.000     | 91.000    | 96.000    | 96.000
System Temp      | 37.000     | degrees C  | ok    | -10.000   | -5.000    | 0.000     | 80.000    | 85.000    | 90.000
Peripheral Temp  | 43.000     | degrees C  | ok    | -10.000   | -5.000    | 0.000     | 80.000    | 85.000    | 90.000
MB_10G Temp      | 50.000     | degrees C  | ok    | -5.000    | 0.000     | 5.000     | 95.000    | 100.000   | 105.000
DIMMA1 Temp      | 42.000     | degrees C  | ok    | -5.000    | 0.000     | 5.000     | 80.000    | 85.000    | 90.000
DIMMA2 Temp      | na         |            | na    | na        | na        | na        | na        | na        | na
DIMMB1 Temp      | 42.000     | degrees C  | ok    | -5.000    | 0.000     | 5.000     | 80.000    | 85.000    | 90.000
DIMMB2 Temp      | na         |            | na    | na        | na        | na        | na        | na        | na
FAN1             | na         |            | na    | na        | na        | na        | na        | na        | na
FAN2             | 3300.000   | RPM        | ok    | 300.000   | 500.000   | 700.000   | 25300.000 | 25400.000 | 25500.000
FAN3             | 900.000    | RPM        | ok    | 300.000   | 500.000   | 700.000   | 25300.000 | 25400.000 | 25500.000
FAN4             | na         |            | na    | na        | na        | na        | na        | na        | na
VCCP             | 1.830      | Volts      | ok    | 1.420     | 1.460     | 1.570     | 2.020     | 2.130     | 2.170
VDIMM            | 1.182      | Volts      | ok    | 0.948     | 0.975     | 1.047     | 1.344     | 1.425     | 1.443
12V              | 12.000     | Volts      | ok    | 10.144    | 10.272    | 10.784    | 12.960    | 13.280    | 13.408
5VCC             | 4.974      | Volts      | ok    | 4.246     | 4.298     | 4.480     | 5.390     | 5.546     | 5.598
3.3VCC           | 3.333      | Volts      | ok    | 2.789     | 2.823     | 2.959     | 3.554     | 3.656     | 3.690
VBAT             | 3.168      | Volts      | ok    | 2.385     | 2.472     | 2.588     | 3.487     | 3.574     | 3.690
5V Dual          | 4.946      | Volts      | ok    | 4.244     | 4.298     | 4.487     | 5.378     | 5.540     | 5.594
3.3V AUX         | 3.265      | Volts      | ok    | 2.789     | 2.823     | 2.959     | 3.554     | 3.656     | 3.690
Chassis Intru    | 0x0        | discrete   | 0x0000| na        | na        | na        | na        | na        | na
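A small filter over that output makes fan monitoring scriptable. The sample lines below are copied from the output above; on the live server you would pipe the real command (ipmitool sensor) into the same awk instead of the printf:

```shell
# Pull the fan rows (name and reading) out of `ipmitool sensor` output.
# The sample stands in for the live command so the example is
# self-contained; replace the printf with: ipmitool sensor
sample='FAN1             | na         |            | na    | na | na | na | na | na | na
FAN2             | 3300.000   | RPM        | ok    | 300.000 | 500.000 | 700.000 | 25300.000 | 25400.000 | 25500.000
FAN3             | 900.000    | RPM        | ok    | 300.000 | 500.000 | 700.000 | 25300.000 | 25400.000 | 25500.000'

fans=$(printf '%s\n' "$sample" |
    awk -F'|' '$1 ~ /^FAN/ { gsub(/ /, "", $1); gsub(/ /, "", $2); print $1, $2 }')
printf '%s\n' "$fans"
```

"na" readings (FAN1, FAN4 above) usually just mean no fan is populated in that slot.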
Monitoring CPU fan speed on Lenovo system x3650 m5 (8871) on RHEL7
1,329,337,905,000
I just bought a Clevo N141WU (at System76 it's known as the Galago Pro) from a Danish PC shop. It mostly works really nicely, but when the fan is spinning down (after a hard workload) it starts making a really high-pitched sound and the fan stops (it sounds like the fan isn't getting the voltage it needs to spin). I've called the shop, and their solution was some Windows software, but the PC came without Windows and I bought it to run Linux in the first place (since it was the same as the Galago Pro, I thought it would work). Since the laptop runs Linux from System76, I think it should be doable. Is there anything I should install to make it run better, or does someone know the BIOS trick to make the fan happy? I'm running Solus 3.X, where X is however many nines you want to spend your time inserting ;-)

Using the keyboard shortcut Fn+1 (found in a thread about the loud fans of the System76 Galago Pro) twice will turn the fan off and on. This removes the sound until the next hard load is gone.

I've found two things since originally posting:

- System76 has some firmware update, but who knows if they'd be willing to send it to someone with a laptop from another reseller (I'll ask them nicely).
- System76 has a package in Ubuntu called system76-dkms which might provide fan control, but it's not in the Solus repo. (I'll probably ask around in the Solus IRC about how packaging works tonight.)
I was successful in Windows 10 with the code below. It handles both failures that the fan can have, namely "fan stops suddenly with fan duty=0" and "fan stops suddenly with rpm > 10000, with an electric noise that can be heard coming from the fan". It requires a program that loads WinRing0, such as ThrottleStop, running in the background. I have not tested it with Clevo Control Center installed. It compiles with MinGW-w64:

\yourmingwpath\i686-w64-mingw32-gcc.exe \yoursourcepath\main.c -o \yourexepath\main.exe -Wall -mwindows

#define UNICODE 1
#define _UNICODE 1

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>
#include <stddef.h>

#define OLS_TYPE 40000
#define IOCTL_OLS_READ_IO_PORT_BYTE  CTL_CODE(OLS_TYPE, 0x833, METHOD_BUFFERED, FILE_READ_ACCESS)
#define IOCTL_OLS_WRITE_IO_PORT_BYTE CTL_CODE(OLS_TYPE, 0x836, METHOD_BUFFERED, FILE_WRITE_ACCESS)

/* Embedded controller command/data ports and status flags */
#define EC_SC   0x66
#define EC_DATA 0x62
#define IBF 1
#define OBF 0
#define EC_SC_READ_CMD 0x80

typedef struct _OLS_WRITE_IO_PORT_INPUT {
    ULONG PortNumber;
    union {
        ULONG  LongData;
        USHORT ShortData;
        UCHAR  CharData;
    };
} OLS_WRITE_IO_PORT_INPUT;

HANDLE hDevice = INVALID_HANDLE_VALUE;
char filename[1024] = {0};

/* Read one byte from an I/O port through the WinRing0 driver */
WORD WInp(WORD port)
{
    FILE *outlog;
    unsigned int error = 0;
    DWORD returnedLength = 0;
    WORD value = 0;
    BOOL bResult = FALSE;
    bResult = DeviceIoControl(hDevice, IOCTL_OLS_READ_IO_PORT_BYTE,
                              &port, sizeof(port), &value, sizeof(value),
                              &returnedLength, NULL);
    if (bResult) {
        /*outlog = fopen(filename, "ab");
        fprintf(outlog, "port=%d, value=%d, retlength=%d\n", port, value, (int)returnedLength);
        fclose(outlog);*/
        return value;
    } else {
        error = GetLastError();
        outlog = fopen(filename, "ab");
        fprintf(outlog, "DeviceIoControl (read) failed. Error %d.\n", error);
        fclose(outlog);
        CloseHandle(hDevice);
        return 0;
    }
}

/* Write one byte to an I/O port through the WinRing0 driver */
WORD WOutp(WORD port, BYTE value)
{
    FILE *outlog;
    unsigned int error = 0;
    DWORD returnedLength = 0;
    BOOL bResult = FALSE;
    DWORD length = 0;
    OLS_WRITE_IO_PORT_INPUT inBuf;
    inBuf.CharData = value;
    inBuf.PortNumber = port;
    length = offsetof(OLS_WRITE_IO_PORT_INPUT, CharData) + sizeof(inBuf.CharData);
    bResult = DeviceIoControl(hDevice, IOCTL_OLS_WRITE_IO_PORT_BYTE,
                              &inBuf, length, NULL, 0, &returnedLength, NULL);
    if (bResult) {
        /*outlog = fopen(filename, "ab");
        fprintf(outlog, "port=%d, value=%d, retlength=%d\n", port, value, (int)returnedLength);
        fclose(outlog);*/
        return value;
    } else {
        error = GetLastError();
        outlog = fopen(filename, "ab");
        fprintf(outlog, "DeviceIoControl (write) failed. Error %d.\n", error);
        fclose(outlog);
        CloseHandle(hDevice);
        return 0;
    }
}

/* Busy-wait until the given EC status flag reaches the given value */
int wait_ec(const unsigned int port, const unsigned int flag, const char value)
{
    int i = 0;
    unsigned char data = WInp(port);
    while (((data >> flag) & 0x1) != value) {
        Sleep(1);
        if (i > 10) {
            //printf("Still waiting on port 0x%x, data=0x%x, flag=0x%x, value=0x%x, i=%d\n", port, data, flag, value, i);
            return 0;
        }
        i++;
        data = WInp(port);
    }
    //printf("Succeeded port 0x%x, data=0x%x, flag=0x%x, value=0x%x, i=%d\n", port, data, flag, value, i);
    return 0;
}

unsigned char read_ec(const unsigned int port)
{
    wait_ec(EC_SC, IBF, 0);
    WOutp(EC_SC, EC_SC_READ_CMD);
    wait_ec(EC_SC, IBF, 0);
    WOutp(EC_DATA, port);
    wait_ec(EC_SC, OBF, 1);
    return WInp(EC_DATA);
}

void do_ec(const unsigned int cmd, const unsigned int port, const unsigned char value)
{
    wait_ec(EC_SC, IBF, 0);
    WOutp(EC_SC, cmd);
    wait_ec(EC_SC, IBF, 0);
    WOutp(EC_DATA, port);
    wait_ec(EC_SC, IBF, 0);
    WOutp(EC_DATA, value);
    wait_ec(EC_SC, IBF, 0);
    return;
}

void write_fan_duty(int duty_percentage)
{
    do_ec(0x99, 0x01, (int)(((double) duty_percentage) / 100.0 * 255.0));
    //FILE *outlog = fopen(filename, "ab");
    //fprintf(outlog, "Fan set to %d\n", duty_percentage);
    //fclose(outlog);
    return;
}

int main()
{
    // get the path of this executable and append "stdout.txt\0" to it for the log file.
    int i = GetModuleFileNameA(NULL, filename, 1024);
    for (; i > 0 && filename[i] != '\\'; i--) {}
    char *dest = &filename[i + 1], *src = "stdout.txt\0";
    for (i = 0; i < 11; i++) dest[i] = src[i];
    FILE *outlog;
    outlog = fopen(filename, "wb");   // clear the log at every start
    fclose(outlog);
    unsigned int error = 0;
    // I could loop CreateFile until a valid handle is returned (which means that
    // WinRing0_1_2_0 got started by ThrottleStop), but Windows Defender blocks the
    // program at start for a few seconds with 100% core usage if I do that.
    Sleep(3000);   // ... so this is what I have to do instead. Disgusting.
    hDevice = CreateFile(L"\\\\.\\WinRing0_1_2_0", GENERIC_READ | GENERIC_WRITE,
                         0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hDevice == INVALID_HANDLE_VALUE) {
        error = GetLastError();
        if (error == ERROR_ACCESS_DENIED) {
            outlog = fopen(filename, "ab");
            fprintf(outlog, "CreateFile failed. Please retry as administrator.\n");
            fclose(outlog);
        } else if (error == ERROR_FILE_NOT_FOUND) {
            outlog = fopen(filename, "ab");
            fprintf(outlog, "CreateFile failed. The WinRing0 driver is probably not loaded yet.\n");
            fclose(outlog);
        } else {
            outlog = fopen(filename, "ab");
            fprintf(outlog, "CreateFile failed. Error %d.\n", error);
            fclose(outlog);
        }
        return 0;
    }

    int val_duty, raw_rpm, val_rpm, temp, last_valid_duty = 50;
    while (1) {
        val_duty = (int)((double)(read_ec(0xCE)) / 255.0 * 100.0);
        raw_rpm = (read_ec(0xD0) << 8) + (read_ec(0xD1));
        if (raw_rpm == 0) val_rpm = 0;
        else val_rpm = 2156220 / raw_rpm;
        temp = read_ec(0x07);
        //outlog = fopen(filename, "ab");
        //fprintf(outlog, "FAN Duty: %d%%, FAN RPMs: %d RPM, CPU Temp: %d°C\n", val_duty, val_rpm, temp);
        //fclose(outlog);
        if (val_rpm > 10000 || val_duty == 0) {
            // There are two malfunctions that can happen:
            // - fan stops suddenly with fan duty=0
            // - fan stops suddenly with rpm > 10000, with an electric noise
            //   that can be heard coming from the fan.
            outlog = fopen(filename, "ab");
            fprintf(outlog, "MALFUNCTION DETECTED: val_rpm=%d, val_duty=%d\n", val_rpm, val_duty);
            fclose(outlog);
            // Panic :O
            if (last_valid_duty < 80) {
                write_fan_duty(last_valid_duty + 20);
            } else {
                write_fan_duty(last_valid_duty - 20);
            }
        } else {
            // This is the custom fan curve code. Can be adjusted to your liking.
            // It's required because I don't know how to set the fan back to
            // "automatic" without manual intervention.
            // Can definitely conflict with other fan speed programs, so be careful.
            // Writes to fan speed happen only if the target fan duty changes.
            if (temp < 55) {
                if (last_valid_duty > 32 || last_valid_duty < 29) write_fan_duty(31);
            } else if (temp < 60) {
                if (last_valid_duty > 42 || last_valid_duty < 39) write_fan_duty(41);
            } else if (temp < 65) {
                if (last_valid_duty > 52 || last_valid_duty < 49) write_fan_duty(51);
            } else if (temp < 70) {
                if (last_valid_duty > 62 || last_valid_duty < 59) write_fan_duty(61);
            } else if (temp < 75) {
                if (last_valid_duty > 72 || last_valid_duty < 69) write_fan_duty(71);
            } else if (temp < 80) {
                if (last_valid_duty > 82 || last_valid_duty < 79) write_fan_duty(81);
            } else if (temp < 85) {
                if (last_valid_duty > 92 || last_valid_duty < 89) write_fan_duty(91);
            } else {
                if (last_valid_duty < 98) write_fan_duty(100);
            }
            last_valid_duty = val_duty;
        }
        Sleep(200);
    }
    return 0;
}

I have not ported the code for usage on Linux-based OSes, though. Doing so would require:

- replacing the WInp(port) and WOutp(port, value) functions with inb(port) and outb(value, port),
- adding ioperm at the start like in this code snippet,
- replacing Sleep(milliseconds) with usleep(microseconds),
- cleaning up all the now-useless includes, defines, structs and handles,
- replacing GetModuleFileNameA with an equivalent function.
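For reference, the tachometer conversion the code relies on (registers 0xD0/0xD1 hold a raw 16-bit value, and rpm = 2156220 / raw) can be checked in isolation. On Linux the raw EC bytes could plausibly be read via the ec_sys kernel module (/sys/kernel/debug/ec/ec0/io, root required); that path is an assumption here, so this sketch only reproduces the arithmetic:

```shell
# Convert the EC's raw 16-bit fan tach value (high byte, low byte) to RPM
# using the same constant as the C code above: rpm = 2156220 / raw.
rpm_from_raw() {
    raw=$(( ($1 << 8) | $2 ))
    if [ "$raw" -eq 0 ]; then
        echo 0                      # tach reads zero: fan stopped
    else
        echo $(( 2156220 / raw ))
    fi
}

rpm_from_raw 0 0    # prints 0
rpm_from_raw 2 28   # raw = 540 -> prints 3993
```

The malfunction check in the C code (rpm > 10000) corresponds to a raw value below about 215, i.e. an implausibly fast tach reading.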
Clevo N141WU noise when fan is cooling
1,329,337,905,000
I have a Samsung NP900X3E laptop and I would like to control the fans, as one of them is making a weird noise. I'm running Ubuntu 14.04.4 LTS with "Linux laptop 3.13.0-85-generic #129-Ubuntu SMP Thu Mar 17 20:50:15 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux".

sensors reports:

root@laptop:/# sensors
acpitz-virtual-0
Adapter: Virtual device
temp1:        +54.0°C  (crit = +106.0°C)
temp2:        +29.8°C  (crit = +106.0°C)

coretemp-isa-0000
Adapter: ISA adapter
Physical id 0:  +53.0°C  (high = +87.0°C, crit = +105.0°C)
Core 0:         +47.0°C  (high = +87.0°C, crit = +105.0°C)
Core 1:         +49.0°C  (high = +87.0°C, crit = +105.0°C)

sensors-detect reports:

Driver `coretemp':
  * Chip `Intel digital thermal sensor' (confidence: 9)

lspci reports:

root@laptop:/# lspci
00:00.0 Host bridge: Intel Corporation 3rd Gen Core processor DRAM Controller (rev 09)
00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
00:16.0 Communication controller: Intel Corporation 7 Series/C210 Series Chipset Family MEI Controller #1 (rev 04)
00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller (rev 04)
00:1c.0 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 1 (rev c4)
00:1c.3 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 4 (rev c4)
00:1c.4 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 5 (rev c4)
00:1d.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 (rev 04)
00:1f.0 ISA bridge: Intel Corporation HM75 Express Chipset LPC Controller (rev 04)
00:1f.2 SATA controller: Intel Corporation 7 Series Chipset Family 6-port SATA Controller [AHCI mode] (rev 04)
00:1f.3 SMBus: Intel Corporation 7 Series/C210 Series Chipset Family SMBus Controller (rev 04)
01:00.0 Network controller: Intel Corporation Centrino Advanced-N 6235 (rev 24)
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)
03:00.0 USB controller: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller (rev 02)

Any idea how I could control the two fans? The problem is that when one fan is needed to cool down the CPU, it's always the same one that is used. This fan is "tired", while the other works great when two fans are needed. Ideally, I would like to "permute" in software the usage of the two fans.
Based on your sensors output, it appears that lm_sensors does not detect any fan speed reading. You should try running sensors-detect again, answering yes to all questions, to hopefully detect a sensor that wasn't previously configured. If that finds nothing, then it simply won't be possible to control those fans: the BIOS controls the fans with a PWM, but its control is usually quite limited, as is its configuration.
Control fans of Samsung NP900X3E
1,329,337,905,000
I'm running Debian Testing on a Dell Latitude 7480. I've been having a lot of freezing issues, and I have finally narrowed it down to an overheating problem. On battery, I can work for 1 hr+ with no problem, though sometimes the system will freeze: the mouse stops moving, the backlight of the keyboard doesn't power off, and I cannot SSH into the machine. On AC power the same occurs 15-20 minutes after plugging in; the bottom of the laptop is quite warm when this happens (not scalding hot, just warmer than it should be). I am currently at this machine on AC and it hasn't frozen after 21 minutes, but I have a USB fan connected to it.

The problem is that the fan never starts. I ran watch sensors during the whole session yesterday, and the temperature does vary; however, the fan speed always changes to a positive number during a watch cycle (2 seconds) and goes back to zero after one or two. So the system reads a spinning fan for about 2-4 seconds, then it stops, but I never hear it. I know the fan works because I ran the onboard diagnostic tool, and the fan not only started but was audible at full speed at some point during the memory test.

EDIT: I forgot to mention that at some point I ran sensors-detect, which suggested I add the modules fan and coretemp to /etc/modules, which I did. When I run lsmod, both modules always display 0 in the Used by column.

Yesterday the system froze at 20:15, so today I checked /var/log/syslog and I found this:

Mar 9 20:15:01 host CRON[1203]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)

I searched for this and all I got was this post, but I cannot see how it relates to my problem (I do have Apache installed, but this is not a server, it's a laptop, and I don't run mysql here; also, the CPU meters don't go up, and reboot is not slower than usual).
There are many other lines like this, but I cannot recall all of them having happened when the system froze; I'm sure not all of them did, because there are more log lines after some of them that indicate the machine was still running. The only other information I can gather is the following, also from /var/log/syslog: Mar 10 18:45:20 host sensors[600]: dell_smm-isa-0000 Mar 10 18:45:20 host sensors[600]: Adapter: ISA adapter Mar 10 18:45:20 host sensors[600]: Processor Fan: 0 RPM (min = 0 RPM, max = 6600 RPM) Mar 10 18:45:20 host sensors[600]: CPU: +39.0°C Mar 10 18:45:20 host sensors[600]: Ambient: +24.0°C Mar 10 18:45:20 host sensors[600]: SODIMM: +23.0°C Mar 10 18:45:20 host sensors[600]: Other: +24.0°C Mar 10 18:45:20 host sensors[600]: nvme-pci-3c00 Mar 10 18:45:20 host sensors[600]: Adapter: PCI adapter Mar 10 18:45:20 host sensors[600]: Composite: +23.9°C (low = -273.1°C, high = +84.8°C) Mar 10 18:45:20 host sensors[600]: (crit = +89.8°C) Mar 10 18:45:20 host sensors[600]: acpitz-acpi-0 Mar 10 18:45:20 host sensors[600]: Adapter: ACPI interface Mar 10 18:45:20 host sensors[600]: temp1: +25.0°C (crit = +107.0°C) Mar 10 18:45:20 host fancontrol[608]: Settings for hwmon6/pwm1: Mar 10 18:45:20 host fancontrol[608]: Depends on hwmon6/temp1_input Mar 10 18:45:20 host fancontrol[608]: Controls hwmon6/fan1_input Mar 10 18:45:20 host fancontrol[608]: MINTEMP=20 Mar 10 18:45:20 host fancontrol[608]: MAXTEMP=60 Mar 10 18:45:20 host fancontrol[608]: MINSTART=150 Mar 10 18:45:20 host fancontrol[608]: MINSTOP=100 Mar 10 18:45:20 host fancontrol[608]: MINPWM=0 Mar 10 18:45:20 host fancontrol[608]: MAXPWM=255 Mar 10 18:45:20 host fancontrol[608]: AVERAGE=1 Mar 10 18:45:20 host systemd[1]: Started fan speed regulator. Mar 10 18:45:20 host fancontrol[787]: Common settings: Mar 10 18:45:20 host fancontrol[787]: INTERVAL=10 Mar 10 18:45:20 host ModemManager[795]: <info> ModemManager (version 1.18.6) starting in system bus... 
Mar 10 18:45:20 host fancontrol[787]: Settings for hwmon6/pwm1:
Mar 10 18:45:20 host fancontrol[787]: Depends on hwmon6/temp1_input
Mar 10 18:45:20 host fancontrol[787]: Controls hwmon6/fan1_input
Mar 10 18:45:20 host fancontrol[787]: MINTEMP=20
Mar 10 18:45:20 host fancontrol[787]: MAXTEMP=60
Mar 10 18:45:20 host fancontrol[787]: MINSTART=150
Mar 10 18:45:20 host fancontrol[787]: MINSTOP=100
Mar 10 18:45:20 host fancontrol[787]: MINPWM=0
Mar 10 18:45:20 host fancontrol[787]: MAXPWM=255
Mar 10 18:45:20 host fancontrol[787]: AVERAGE=1

The two blocks above are not consecutive, but they contain the relevant information. Here are the contents of some files I deemed relevant:

cat /sys/devices/platform/dell_smm_hwmon/driver_override
(null)

cat /sys/devices/platform/dell_smm_hwmon/uevent
DRIVER=dell_smm_hwmon
MODALIAS=platform:dell_smm_hwmon

cat fancontrol
# Configuration file generated by pwmconfig, changes will be lost
INTERVAL=10
DEVPATH=hwmon6=devices/platform/dell_smm_hwmon
DEVNAME=hwmon6=dell_smm
FCTEMPS=hwmon6/pwm1=hwmon6/temp1_input
FCFANS= hwmon6/pwm1=hwmon6/fan1_input
MINTEMP=hwmon6/pwm1=20
MAXTEMP=hwmon6/pwm1=60
MINSTART=hwmon6/pwm1=150
MINSTOP=hwmon6/pwm1=100

The last one is already in the syslog block above, but I reproduce it here nonetheless. All solutions I've encountered for a non-spinning fan on Linux suggest installing fancontrol and then running pwmconfig. The first time I tried, I got an error telling me there was no /etc/fancontrol.conf file; I tried running the command again while a USB fan was plugged in and it worked. To be on the safe side, I just pressed Enter to generate the config file with the default parameters, but I still cannot hear the fans kicking in. As I said above, the sensors program tells me the speed changes every 2-4 seconds, but the fan is never audible and it doesn't stay on.
The fan works on Windows (this laptop used to run it; I replaced the SSD with a new one, but kept the old one unformatted), and, as I said above, also in the onboard diagnostic tool. I've also run Puppy Linux from a USB stick and it doesn't have this problem, although I didn't hear the fan working there either. Is there a way to properly configure fancontrol to solve this? Are there any other options? I can very well use the laptop with an external fan plugged in, but that's not the sort of solution I'm looking for. Thanks!
The solution in this case was to modify /etc/default/grub so that it contains this line: GRUB_CMDLINE_LINUX_DEFAULT="quiet acpi_osi=!Windows 2020". The acpi_osi parameter controls which operating-system interface strings the kernel's ACPI layer reports to the firmware, so the firmware behaves as it would under the named OS (a leading ! removes that string instead).
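For reference, here is a minimal sketch of the edit, performed against a scratch copy (grub.sample is a hypothetical stand-in; the real edit would target /etc/default/grub, followed by sudo update-grub). Note that since the value contains a space, the kernel documentation suggests quoting it on the command line, e.g. acpi_osi='!Windows 2020'; the unquoted form above is reproduced as given.

```shell
# Scratch stand-in for /etc/default/grub, so nothing on the system is touched.
cat > grub.sample <<'EOF'
GRUB_DEFAULT=0
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
EOF

# Append the acpi_osi parameter inside the existing quotes.
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="quiet"/GRUB_CMDLINE_LINUX_DEFAULT="quiet acpi_osi=!Windows 2020"/' grub.sample

grep GRUB_CMDLINE_LINUX_DEFAULT grub.sample
# On the real file, apply the change with: sudo update-grub
```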
Debian Testing freezing because of overheating, fan sensors give confusing info
1,329,337,905,000
I am running Debian 8 with the 3.16 kernel on an eeePC 1001P. I have a fair bit of Linux experience, but unfortunately this one has me and my google-fu at a loss. Initially almost everything worked out of the box, except that brightness control was erratic and my fan was always running. I tracked the brightness issue to the presence of acpi_video0 in /sys/class/backlight, which caused X11 to prefer it over intel_backlight (which actually controls my backlight properly). I solved this by editing my xorg.conf. Installing lm-sensors then showed only 2 sensors, both reading temperature. Booting with acpi_osi=Linux gains me fan control, and while I can still control my brightness via the slider in the Settings app, my brightness keys are dead.
I fixed this by installing acpid (sudo apt-get install acpid). I then created two files:

/etc/acpi/events/asus-brightness:

event=hotkey ASUS010:00 0000002[0-9a-f]
action=/etc/acpi/brightness.sh %e

/etc/acpi/brightness.sh:

#!/bin/bash
test -f /usr/share/acpi-support/key-constants || exit 0
export DISPLAY=:0
PREV=$(cat /etc/acpi/prevbrightness)
if [[ "0x$3" -eq "0x20" || "0x$3" -lt "0x$PREV" ]] ; then
    xdotool key XF86MonBrightnessDown
elif [[ "0x$3" -eq "0x2F" || "0x$3" -gt "0x$PREV" ]] ; then
    xdotool key XF86MonBrightnessUp
else
    echo >&2 "Unknown argument $3"
fi
echo "$3" > /etc/acpi/prevbrightness

Then, as root (sudo su first), I seeded the state file: echo 00000020 > /etc/acpi/prevbrightness. Brightness controls now work!
Booting with acpi_osi=Linux fixes fan control but breaks brightness keys
1,329,337,905,000
Recently I installed openSUSE 12.3 on my Lenovo U410. I also use Windows on this machine. When I use openSUSE, my laptop gets much hotter than it does in Windows. I used Ubuntu before openSUSE and it worked fine, but now the fan barely runs. Do you know how to solve this?
I have finally found the problem: it was my discrete NVIDIA GPU. openSUSE 12.3 detected it and all the drivers were fine, but I do not know why it gets hot. I think the main problem is the Optimus technology (I had some display problems with it in Ubuntu 12.04 too!). Since I am not using the discrete GPU in Linux, I disabled it. There are two ways to do this: 1. If there is no BIOS option to use the integrated (UMA) GPU only, install bumblebee and the required packages, then turn off the discrete GPU. 2. If there is a BIOS option, disable Optimus there. (There will be no problem with Windows, since it uses the integrated GPU for everything except the programs you define as "run with high-performance NVIDIA GPU"; you can re-enable Optimus in the BIOS to go back and run those.) First solution: follow the instructions from here. Second solution: go to your BIOS and disable Optimus, then come back to Linux, delete all NVIDIA packages, and check your hardware to make sure no NVIDIA GPU shows up. One more point: you can install the YaST power management package for better control over power saving, and therefore over the fan and temperature of your laptop. Instructions are here. I hope someday Linux will support Optimus as well as Windows does!
My Laptop gets hot on OpenSUSE
1,329,337,905,000
I have a Lenovo Legion Y520 with these specs:

zjeffer@ArchLinux
-----------------
OS: Arch Linux x86_64
Host: 80WK Lenovo Y520-15IKBN
Kernel: 5.1.7-arch1-1-ARCH
Uptime: 42 mins
Packages: 1659 (pacman)
Shell: zsh 5.7.1
Resolution: 1920x1080, 1920x1080
WM: bspwm
Theme: OSX-Arc-Plus [GTK2/3]
Icons: Papirus-Light [GTK2/3]
Terminal: gnome-terminal
CPU: Intel i7-7700HQ (8) @ 3.800GHz
GPU: NVIDIA GeForce GTX 1050 Mobile
GPU: Intel HD Graphics 630
Memory: 1369MiB / 7866MiB

I'm using thinkfan to try to control my CPU fan. Sadly, I can't see what my true fan speed is, as it always says 8 RPM. This is my thinkfan.conf, if it matters:

######################################################################
# thinkfan 0.7 example config file
# ================================
#
# ATTENTION: There is only very basic sanity checking on the configuration.
# That means you can set your temperature limits as insane as you like. You
# can do anything stupid, e.g. turn off your fan when your CPU reaches 70°C.
#
# That's why this program is called THINKfan: You gotta think for yourself.
#
######################################################################
#
# IBM/Lenovo Thinkpads (thinkpad_acpi, /proc/acpi/ibm)
# ====================================================
#
# IMPORTANT:
#
# To keep your HD from overheating, you have to specify a correction value for
# the sensor that has the HD's temperature. You need to do this because
# thinkfan uses only the highest temperature it can find in the system, and
# that'll most likely never be your HD, as most HDs are already out of spec
# when they reach 55 °C.
# Correction values are applied from left to right in the same order as the
# temperatures are read from the file.
#
# For example:
# tp_thermal /proc/acpi/ibm/thermal (0, 0, 10)
# will add a fixed value of 10 °C the 3rd value read from that file. Check out
# http://www.thinkwiki.org/wiki/Thermal_Sensors to find out how much you may
# want to add to certain temperatures.
# Syntax:
# (LEVEL, LOW, HIGH)
# LEVEL is the fan level to use (0-7 with thinkpad_acpi)
# LOW is the temperature at which to step down to the previous level
# HIGH is the temperature at which to step up to the next level
# All numbers are integers.
#
# I use this on my T61p:
# tp_fan /proc/acpi/ibm/fan
# tp_thermal /proc/acpi/ibm/thermal (0, 10, 15, 2, 10, 5, 0, 3, 0, 3)

hwmon /sys/devices/platform/coretemp.0/hwmon/hwmon0/temp1_input
hwmon /sys/devices/platform/coretemp.0/hwmon/hwmon0/temp2_input
hwmon /sys/devices/platform/coretemp.0/hwmon/hwmon0/temp3_input
hwmon /sys/devices/platform/coretemp.0/hwmon/hwmon0/temp4_input
hwmon /sys/devices/platform/coretemp.0/hwmon/hwmon0/temp5_input
hwmon /sys/devices/virtual/thermal/thermal_zone1/temp

(0, 0, 51)
(1, 50, 52)
(2, 51, 55)
(3, 54, 58)
(4, 56, 63)
(5, 60, 70)
(6, 66, 79)
(7, 74, 92)
(127, 85, 32767)

Here's the output of dmesg | grep -i thinkpad:

[ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-linux root=UUID=661a855a-c479-4291-bcb2-95b148ce2020 rw quiet nowatchdog nvidia-drm.modeset=1 thinkpad_acpi fan_control=1
[ 0.155975] Kernel command line: BOOT_IMAGE=/vmlinuz-linux root=UUID=661a855a-c479-4291-bcb2-95b148ce2020 rw quiet nowatchdog nvidia-drm.modeset=1 thinkpad_acpi fan_control=1
[ 4.231093] thinkpad_acpi: ThinkPad ACPI Extras v0.26
[ 4.231094] thinkpad_acpi: http://ibm-acpi.sf.net/
[ 4.231094] thinkpad_acpi: ThinkPad BIOS 4KCN40WW, EC unknown
[ 4.231094] thinkpad_acpi: Lenovo Lenovo Y520-15IKBN, model 80WK
[ 4.231554] thinkpad_acpi: Standard ACPI backlight interface available, not loading native one
[ 4.231620] thinkpad_acpi: Console audio control enabled, mode: monitor (read only)
[ 4.232877] thinkpad_acpi: battery 1 registered (start 0, stop 0)
[ 4.232879] battery: new extension: ThinkPad Battery Extension
[ 4.232896] input: ThinkPad Extra Buttons as /devices/platform/thinkpad_acpi/input/input8

Here it says EC unknown, so I have no idea which EC I have, and I can't find anything on the internet about my model.
I checked this speed in s-tui, in /proc/acpi/ibm/fan and in sensors: it's always at 8 RPM, which is of course impossible given that I can hear it blasting at full speed while playing games. In Windows 10, SpeedFan doesn't find any fans either. I also updated my BIOS from version 4KCN40WW to 4KCN45WW. No change. How can I see my true fan speed?
Partial answer: From your dmesg, thinkpad_acpi gets loaded. I had a quick look at the kernel source code, and there don't seem to be any fan-related messages it outputs. However, some comments in the code say:

ThinkPad EC register 0x84 (LSB), 0x85 (MSB): Main fan tachometer reading (in RPM)

This register is present on all ThinkPads with a new-style EC, and it is known not to be present on the A21m/e, and T22, as there is something else in offset 0x84 according to the ACPI DSDT. Other ThinkPads from this same time period (and earlier) probably lack the tachometer as well. Unfortunately a lot of ThinkPads with new-style ECs but whose firmware was never fixed by IBM to report the EC firmware version string probably support the tachometer (like the early X models), so detecting it is quite hard. We need more data to know for sure.

FIRMWARE BUG: always read 0x84 first, otherwise incorrect readings might result.
FIRMWARE BUG: may go stale while the EC is switching to full speed mode.

For firmware bugs, refer to: http://thinkwiki.org/wiki/Embedded_Controller_Firmware#Firmware_Issues

EC is the embedded controller of your laptop. So there are three potential issues: on some ThinkPads the tachometer isn't available at all, on some something else is at this location, and on some the firmware is wrong. Which means you have to match up your Lenovo Legion Y520 with whatever version nomenclature you are using, and look for firmware bugs. If in doubt, I'd contact the maintainers of this module via the kernel bug tracker and see if they have any ideas about your particular model.
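On machines where thinkpad_acpi does expose the fan, the EC's idea of the speed and level can be read directly from the procfs interface; a guarded sketch that is a harmless no-op where the file doesn't exist:

```shell
# Print the thinkpad_acpi fan status if the interface is present.
FAN=/proc/acpi/ibm/fan
if [ -r "$FAN" ]; then
    cat "$FAN"    # shows "status:", "speed:" (RPM) and "level:" lines
else
    echo "no $FAN on this machine (thinkpad_acpi fan interface not available)"
fi
```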
Laptop fan always says it's running at 8 RPM
1,329,337,905,000
Previously, on Ubuntu 14.04.1 LTS, my computer's fans were always spinning just as much as they needed to. Today I switched to the base version of Debian 8.3.0 and now they are always running at 100% speed, even when the computer is completely idle. Looking at other, similar questions around the web, fancontrol should solve this, but it fails to start.

$ sudo service fancontrol start
Job for fancontrol.service failed. See 'systemctl status fancontrol.service' and 'journalctl -xn' for details.

$ systemctl status fancontrol.service
● fancontrol.service - fan speed regulator
   Loaded: loaded (/lib/systemd/system/fancontrol.service; enabled)
   Active: failed (Result: exit-code) since Sat 2016-03-26 00:11:17 CET; 48s ago
     Docs: man:fancontrol(8)
           man:pwmconfig(8)
  Process: 4735 ExecStartPre=/usr/sbin/fancontrol --check (code=exited, status=1/FAILURE)

I have configured lm-sensors with sensors-detect; this is the output:

$ sudo sensors-detect
# sensors-detect revision 6209 (2014-01-14 22:51:58 +0100)
# System: MEDIONPC MS-7646 [1.0]

This program will help you determine which kernel modules you need to load to use lm_sensors most effectively. It is generally safe and recommended to accept the default answers to all questions, unless you know what you're doing.

Some south bridges, CPUs or memory controllers contain embedded sensors. Do you want to scan for them? This is totally safe. (YES/no): y
Module cpuid loaded successfully.
Silicon Integrated Systems SIS5595... No
VIA VT82C686 Integrated Sensors... No
VIA VT8231 Integrated Sensors... No
AMD K8 thermal sensors... No
AMD Family 10h thermal sensors... Success! (driver `k10temp')
AMD Family 11h thermal sensors... No
AMD Family 12h and 14h thermal sensors... No
AMD Family 15h thermal sensors... No
AMD Family 15h power sensors... No
AMD Family 16h power sensors... No
Intel digital thermal sensor... No
Intel AMB FB-DIMM thermal sensor... No
VIA C7 thermal sensor... No
VIA Nano thermal sensor...
No

Some Super I/O chips contain embedded sensors. We have to write to standard I/O ports to probe them. This is usually safe. Do you want to scan for Super I/O sensors? (YES/no): y
Probing for Super-I/O at 0x2e/0x2f
Trying family `National Semiconductor/ITE'... No
Trying family `SMSC'... No
Trying family `VIA/Winbond/Nuvoton/Fintek'... No
Trying family `ITE'... No
Probing for Super-I/O at 0x4e/0x4f
Trying family `National Semiconductor/ITE'... No
Trying family `SMSC'... No
Trying family `VIA/Winbond/Nuvoton/Fintek'... Yes
Found unknown chip with ID 0x0903

Some systems (mainly servers) implement IPMI, a set of common interfaces through which system health data may be retrieved, amongst other things. We first try to get the information from SMBIOS. If we don't find it there, we have to read from arbitrary I/O ports to probe for such interfaces. This is normally safe. Do you want to scan for IPMI interfaces? (YES/no): y
Probing for `IPMI BMC KCS' at 0xca0... No
Probing for `IPMI BMC SMIC' at 0xca8... No

Some hardware monitoring chips are accessible through the ISA I/O ports. We have to write to arbitrary I/O ports to probe them. This is usually safe though. Yes, you do have ISA I/O ports even if you do not have any ISA slots! Do you want to scan the ISA I/O ports? (YES/no): y
Probing for `National Semiconductor LM78' at 0x290... No
Probing for `National Semiconductor LM79' at 0x290... No
Probing for `Winbond W83781D' at 0x290... No
Probing for `Winbond W83782D' at 0x290... No

Lastly, we can probe the I2C/SMBus adapters for connected hardware monitoring devices. This is the most risky part, and while it works reasonably well on most systems, it has been reported to cause trouble on some systems. Do you want to probe the I2C/SMBus adapters now? (YES/no): y
Using driver `i2c-piix4' for device 0000:00:14.0: ATI Technologies Inc SB600/SB700/SB800 SMBus
Module i2c-dev loaded successfully.

Next adapter: SMBus PIIX4 adapter at 0b00 (i2c-0)
Do you want to scan it?
(YES/no/selectively): y
Client found at address 0x28
Probing for `National Semiconductor LM78'... No
Probing for `National Semiconductor LM79'... No
Probing for `National Semiconductor LM80'... No
Probing for `National Semiconductor LM96080'... No
Probing for `Winbond W83781D'... No
Probing for `Winbond W83782D'... No
Probing for `Winbond W83627HF'... No
Probing for `Winbond W83627EHF'... No
Probing for `Winbond W83627DHG/W83667HG/W83677HG'... No
Probing for `Asus AS99127F (rev.1)'... No
Probing for `Asus AS99127F (rev.2)'... No
Probing for `Asus ASB100 Bach'... No
Probing for `Analog Devices ADM1029'... No
Probing for `ITE IT8712F'... No
Client found at address 0x50
Probing for `Analog Devices ADM1033'... No
Probing for `Analog Devices ADM1034'... No
Probing for `SPD EEPROM'... Yes (confidence 8, not a hardware monitoring chip)
Probing for `EDID EEPROM'... No
Client found at address 0x51
Probing for `Analog Devices ADM1033'... No
Probing for `Analog Devices ADM1034'... No
Probing for `SPD EEPROM'... Yes (confidence 8, not a hardware monitoring chip)

Next adapter: SMBus PIIX4 adapter at 0b20 (i2c-1)
Do you want to scan it? (YES/no/selectively): y

Now follows a summary of the probes I have just done. Just press ENTER to continue:

Driver `k10temp' (autoloaded):
* Chip `AMD Family 10h thermal sensors' (confidence: 9)

No modules to load, skipping modules configuration.
Unloading i2c-dev... OK
Unloading cpuid... OK

Running sensors gives me:

$ sensors
k10temp-pci-00c3
Adapter: PCI adapter
temp1: +44.5°C (high = +70.0°C)

Then there's also pwmconfig, which says:

/usr/sbin/pwmconfig: There are no pwm-capable sensor modules installed

The PC was originally a Windows one made by Medion, the base board being the MS-7646, according to dmidecode. The content of /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor is ondemand. What can I do to keep the fans from spinning at full speed all the time, so that they behave as they did on Ubuntu?
It turns out that after the computer had cooled down completely overnight, the fans weren't running that fast, initially at least. I checked sensors and it reported a temperature of 16°C (60°F), but as that quickly rose to about 42°C (108°F), the fans started spinning up again. Conclusions: there must already be some other fan controller active, perhaps BIOS-controlled; and either a) the reported temperature is wrong (unlikely, as then it would probably be wrong all the time rather than increasing so quickly), or b) the computer really does heat up that quickly, and Ubuntu was wrong in not spinning the fans up enough, rather than Debian being wrong in doing the opposite. Opening the case, removing the fan and looking underneath, I found a thick layer of dust, confirming theory b. Believe it or not, removing all the dust helped tremendously. The average temperature is now slightly above 22°C (72°F). The main fan is still relatively loud, but apparently that's just because it's a cheap one. I uninstalled fancontrol again, as there's no sense in keeping it.
Fans always spinning at max. speed
1,329,337,905,000
I am trying to boot the antiX live CD on my old Toshiba A200 laptop with 2 GB of RAM. When I happened to run sensors in the terminal, I saw that the CPU temperature was 70°C! I turned the laptop off and on again, and the fan revved up to the maximum. Does antiX stop the fan? What should I do in this case?
Fan control, especially for old hardware like yours, is a rather obscure matter on Linux; there are multiple variables to take into account, e.g. kernel version, BIOS version, BIOS settings, and their combination. Personally I have never had such a problem, rather the opposite: a fan running constantly at 100% for no reason... but in that case it was "only" annoying. Coming back to your question: First, I would power off the laptop, because high temperatures can damage hardware. Second, I would not rely on the temperature readings alone, but rather on acoustic noise, to prove the fan is running: can you hear it? Third, I would search for hardware-specific (that is, for your laptop's model/manufacturer) fan control software, e.g. https://sourceforge.net/projects/fnfx/
AntiX linux disable laptop fan
1,329,337,905,000
I am on a custom board using an i.MX6. I am using Yocto (Pyro) to build my kernel (4.14.16). I am using the generic imx6qdl.dtsi device tree entry for PWM2 to drive the fan and it appears to work fine. The fan has a Tachometer input, which is connected to GPIO2_7. How do I read the fan speed? I have seen device tree blobs for cooling devices, but none of the examples seem to have a tachometer to monitor the fan's speed.
I was unable to find a device-tree solution, but found enough code snippets to put together an application to read it. Basically I just set up an interrupt on the GPIO and used clock_gettime to measure the period between edges. The reading requires a lot of filtering, but I am only using it to make sure the fan is running, so that is fine.
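The same idea can be sketched in shell. The conversion from the measured edge-to-edge period to RPM is pure arithmetic; the helper name, the assumption of 2 tach pulses per revolution (common for PC-style fans, but check the fan's datasheet), and the sysfs GPIO number are all mine, not from the original answer. On i.MX6, GPIO2_7 should map to global GPIO (2-1)*32 + 7 = 39 under the legacy sysfs interface, but verify that on your board.

```shell
# Convert a pulse period (nanoseconds between successive rising edges)
# into RPM, given the fan's pulses per revolution.
rpm_from_period_ns() {
    # 60 s/min expressed in ns, divided by the time of one revolution.
    echo $(( 60000000000 / ($1 * $2) ))
}

rpm_from_period_ns 10000000 2   # 10 ms between edges, 2 pulses/rev -> 3000

# Hardware side (sysfs GPIO, run as root; GPIO2_7 = 39 is an assumption):
#   echo 39 > /sys/class/gpio/export
#   echo in > /sys/class/gpio/gpio39/direction
#   echo rising > /sys/class/gpio/gpio39/edge
# A poll(2)-based reader (or the interrupt + clock_gettime code the answer
# describes) then timestamps successive edges and feeds the period in here.
```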
How to read back a fan speed?
1,343,357,466,000
How can I change the volume name of a FAT32 filesystem? I know I can set the volume name when I format the partition with the -n option of mkfs.vfat, but how do I change the name without reformatting? I especially want to be able to use lower- and uppercase letters. In the worst case I can use a Windows tool, but Windows by default transforms all letters to uppercase (though it works fine with lowercase letters in volumes created with mkfs.vfat).
So far the only way I found to change a FAT volume name with lower case letters is to edit it with a hex editor (copy the first few sectors with dd to a temporary file, edit it and copy it back).  It works well so far (even with FAT16) and neither fsck nor CHKDSK from Windows 7 complained.  But no guarantee, of course ;-)
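The hex-editor step can also be scripted with dd. Per the FAT specification, the 11-byte BS_VolLab field sits at offset 71 of the boot sector on FAT32 (offset 43 on FAT12/16); note a copy of the label may also live in a root-directory entry, which this does not touch. The sketch below patches a scratch file standing in for the copied-out boot sector (boot.img and the label are hypothetical; in the real workflow you would first dd the sector from the device and later write it back).

```shell
# Scratch stand-in (all zeros) for the boot sector copied out with dd.
dd if=/dev/zero of=boot.img bs=512 count=1 status=none

# Write an 11-byte, space-padded label into the FAT32 BS_VolLab field (offset 71).
printf '%-11.11s' 'MyMusic' | dd of=boot.img bs=1 seek=71 conv=notrunc status=none

# Read it back (prints "MyMusic" padded with spaces to 11 bytes).
dd if=boot.img bs=1 skip=71 count=11 status=none; echo
```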
How can I change the volume name of a FAT32 filesystem?
1,343,357,466,000
Is there a way to create a FAT32 filesystem containing a set of files, without needing to mount it or have root access? I am developing a software application for an old operating system as a hobby, and as part of the build process I would like to package up some source files into a FAT32 disk image, then launch QEMU to boot the image and run an old compiler in it. Afterwards I would like to extract the compiled file out of the FAT32 disk image. I can create the filesystem with mkfs.vfat, however the only way I know of to get files into and out of the image is to mount it, which typically requires root access and is not conducive to being embedded in a build process. Ideally I am after something like the zip and unzip utilities, only instead of creating/extracting .zip files, it would create and extract disk images in FAT16 or FAT32 format. Does anything like this exist? The only things I can find online all involve mounting the disk image.
Of course, despite all my unsuccessful searching, I finally found the answer only moments after posting a question about it. The mtools package can do it like this:

# Create a 2 MB file
dd if=/dev/zero of=disk.img bs=1M count=2

# Put a FAT filesystem on it (use -F for FAT32, otherwise it's automatic)
mformat -i disk.img ::

# Add a file to it
mcopy -i disk.img example.txt ::

# List files
mdir -i disk.img ::

# Extract a file
mcopy -i disk.img ::/example.txt extracted.txt

mtools works by specifying drive letters (like C:), with the special : drive (written as ::) referring to the image given on the command line with the -i option.
Create and populate FAT32 filesystem without mounting it
1,343,357,466,000
When you upgrade or reinstall a package with dpkg (and ultimately anything that uses it, like apt-get etc) it backs up the existing files by creating a hard link to the file before replacing it. That way if the unpack fails it can easily put back the existing files. That's great, since it protects the operating system from Bad Things™ happening. Except... it only works if your filesystem supports hard links. Not all filesystems do - such as FAT filesystems. I am working on a distribution of Debian for a specific embedded ARM platform, and the boot environment requires that certain files (the kernel included) are on a FAT filesystem so the boot code is able to locate and load them. When you go to upgrade the kernel package (or any other package that has files in that FAT partition) the install fails with: dpkg: error processing archive linux-image3.18.11+_3.18.11.2.armadillian_armhf.deb (--install): unable to make backup link of `./boot/vmlinuz-3.18.11+' before installing new version: Operation not permitted And the whole upgrade fails. I have scoured the web, and the only references I can find are specific people with specific problems when doing specific upgrades, the answer to which is usually "Delete /boot/vmlinuz-3.18.11+ and try again", and yes, that fixes that specific problem. But that's not the answer for me. I am an OS distributor, not an OS user, so I need a way to fix this that doesn't involve the end user manually deleting their kernel files before doing an upgrade. I need a way to tell dpkg to "copy, not hard link" for files on /boot (or all files for all I care, though that would slow down the upgrade operation somewhat), or better yet "If a hard link fails, don't complain, just copy it instead". I have tried such things as the --force-unsafe-io and even --force-all flags to dpkg, but nothing has any effect.
The behaviour you're seeing is implemented in archives.c in the dpkg source, line 1030 (for version 1.18.1):

  debug(dbg_eachfiledetail, "tarobject nondirectory, 'link' backup");
  if (link(fnamevb.buf,fnametmpvb.buf))
    ohshite(_("unable to make backup link of '%.255s' before installing new version"), ti->name);

It seems to me that you could handle the link failure by falling back to the rename behaviour used in lines 1003 and following; something like this (untested):

  debug(dbg_eachfiledetail, "tarobject nondirectory, 'link' backup");
  if (link(fnamevb.buf,fnametmpvb.buf)) {
    debug(dbg_eachfiledetail,"link failed, nonatomic");
    nifd->namenode->flags |= fnnf_no_atomic_overwrite;
    if (rename(fnamevb.buf,fnametmpvb.buf))
      ohshite(_("unable to move aside '%.255s' to install new version"), ti->name);
  }

I'm not a dpkg expert though...
dpkg replacing files on a FAT filesystem
1,343,357,466,000
I have an application which will search for a corrupted FAT file system and repair it. For testing the application I will need a corrupted file system. What is a good and reproducible way for corrupting a FAT file system? Creating bad sectors for example.
A partial solution:

dd if=/dev/zero count=100 bs=1k of=fs.fat
mkfs -t vfat fs.fat
mount fs.fat /mnt   ## as root
# cp some file
umount /mnt         ## as root
cp fs.fat fs.ref
vi fs.ref           ## change some bytes
cp fs.ref fs.sampleX

Now you have a good fs (fs.fat) and a corrupted one (fs.ref):

sudo mount -t vfat fs.ref /mnt
mount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error
In some cases useful info is found in syslog - try dmesg | tail or so

Now you can try to fix fs.sampleX. Knowing a bit about FAT (or filesystem layout in general) helps you to "cleverly corrupt" fs.ref. This approach can be applied to any fs type (extX, xfs, ...).
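If "change some bytes in vi" is too fuzzy for a reproducible test suite, the corruption itself can be scripted. A sketch that deterministically clobbers a fixed range with dd (the file name is a stand-in for fs.ref; offset 54 is where the FAT12/16 BS_FilSysType string lives per the FAT spec, chosen here only as an example target):

```shell
# Stand-in image (zeros); in the real workflow this would be a copy of fs.ref.
dd if=/dev/zero of=fs.sample bs=1k count=100 status=none

# Deterministically corrupt 4 bytes at offset 54 without truncating the file.
printf 'XXXX' | dd of=fs.sample bs=1 seek=54 conv=notrunc status=none

# Confirm the bytes changed (prints XXXX).
dd if=fs.sample bs=1 skip=54 count=4 status=none; echo
```

Because the offset and the replacement bytes are fixed, every test run starts from an identical corrupted image, which makes repair-tool regressions easy to spot.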
create a corrupted FAT file system
1,343,357,466,000
I'm wondering if this is considered safe. I know the file handles work just fine as long as a link remains, and I know the identifier is the inode rather than the name, but I am not sure how it works across different FS. For example copying from an ext4 harddrive to a NTFS USB stick, or copying from a FAT stick to an ext4 drive. I was just copying over a bunch of large media files, and renamed them before the copy was done. The checksums match. I wonder if it is always safe, will it work in the opposite direction, are there quirks I should know about or reasons to avoid doing this? The OS/Distro is Ubuntu with the 5.0.0-15 Linux kernel.
I am not sure how it works across different FS. The rename operation itself doesn’t operate across different file systems; there is no difference between writing to a file from say a text editor and writing to a file using cp with a source file on another file system. On Linux, the rename system call is transparent to other links to the file, which include other hard links and open file descriptions (and descriptors). The manpage explicitly states Open file descriptors for oldpath are also unaffected. (I’m qualifying with “on Linux” only because I couldn’t find a reference in POSIX; I think this is common across POSIX-style operating systems.) So when you’re copying a file across file systems, cp opens the source for reading, the target for writing, and starts copying. Rename operations don’t affect the file descriptors it’s using; you can rename the source and/or the target without affecting cp. Another way to think of this is that the file’s name in its containing directory is part of its directory entry, which points at its inode; open file descriptions are other pointers to the inode, as are other hard links. Changing the file name doesn’t affect any other existing pointers. The caveats to watch out for are that tools such as mv don’t limit themselves to what the rename system call can do; if you mv files across file systems, the rename will fail (or mv will figure out that the operation is across file systems and won’t even attempt it), and mv will then resort to manually copying the file contents and deleting the original. This won’t give good results if the file being renamed is being changed simultaneously.
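The rename-transparency described above is easy to demonstrate on a single file system: start a writer, rename its output file mid-write, and the writer's open descriptor keeps following the inode (a sketch; the file names and timings are arbitrary):

```shell
# Writer: 100 lines, slowly, through one file descriptor opened once.
( for i in $(seq 1 100); do echo "line $i"; sleep 0.01; done > out.txt ) &
writer=$!

sleep 0.3             # let it get partway through
mv out.txt moved.txt  # rename while the writer is still running
wait "$writer"

wc -l < moved.txt     # all 100 lines end up in the renamed file, none lost
```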
Renaming a file while it is being written
1,343,357,466,000
I need a way to change the creation time of a file on a mounted FAT32 volume. I have to do this because my MP3 player will only read files sorted by creation time. If I can find a way to set a file's creation time (like touch can do with modification/access times), a trivial script will allow the MP3 files to be read in the right order (as expected, alphabetically). But my searches so far have been in vain. I hope you guys can help me!
I finally ended up using fatsort, which does the job nicely, and it's also a lot quicker than copying the files over and over.
Change file creation time on a FAT filesystem
1,343,357,466,000
I'm trying to understand which FAT-based filesystems my real-time 2.6 Linux kernel supports. I have tried 3 things:

/proc/filesystems shows vfat, among others not relevant to the question (like ext2, etc.)

/proc/config.gz shows:

# DOS/FAT/NT Filesystems
#
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="ascii"
# CONFIG_NTFS_FS is not set

Commands like ls /lib/modules/$(uname -r)/kernel/fs show nothing, as the .../fs folder doesn't exist.

So, looking at this, it is safe to assume that FAT and VFAT are supported, but what about FAT32 or exFAT? It's not explicitly specified. How can I know?
The FAT drivers include support for FAT32; it’s treated as a variant along with FAT12 and FAT16. If you see vfat in /proc/filesystems, then FAT32 is supported. exFAT is supported, in recent kernels, by a specific exFAT driver, with its own configuration option (EXFAT_FS). It’s listed separately in /proc/filesystems. exFAT support is also available as a FUSE exFAT driver.
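A quick check covering both cases, vfat (which includes FAT12/16/32) and exfat, is to look in /proc/filesystems; if a name is absent, the driver may still exist as a not-yet-loaded module, so the sketch below reports rather than fails:

```shell
# Report whether the vfat and exfat drivers are registered with the kernel.
for fs in vfat exfat; do
    if grep -qw "$fs" /proc/filesystems; then
        echo "$fs: registered (built in or module already loaded)"
    else
        echo "$fs: not registered (may still be loadable: modprobe $fs)"
    fi
done
```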
Understanding Linux FAT fs (FAT, VFAT, FAT32, exFAT) support
1,343,357,466,000
I mounted a FAT32 drive onto my Linux computer using the following terminal command: sudo mount /dev/sdb1 /media/exampleFolderName -o dmask=000,fmask=111. I did this so I could share and edit the files over a network connection. Unfortunately FAT32 doesn't store per-file permissions, so this sets the right permissions for the entire drive while it's connected. If I understand mount correctly, I'll have to do this every time I plug the drive in, which I don't want to do. I've heard about /etc/fstab. So my question: how do I turn the above mount command into an fstab entry? If anyone could also explain what dmask and fmask mean, that would be appreciated.
You probably want to add a line like /dev/sdb1 /media/drive1 vfat dmask=000,fmask=0111,user 0 0 to /etc/fstab. The additional ,user in the options field allows any user to mount this filesystem, not just root.
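As for dmask and fmask: as I understand the vfat defaults, they are octal bitmasks of permission bits to remove (like umask), dmask applying to directories and fmask to files, with the effective mode being 0777 minus the masked bits. A quick sanity check of the values used above:

```shell
# fmask=0111 clears the execute bits from files: 0777 & ~0111 = 0666
printf 'file mode: %04o\n' $(( 0777 & ~0111 ))

# dmask=000 clears nothing, so directories stay traversable: 0777
printf 'dir mode:  %04o\n' $(( 0777 & ~0000 ))
```

This is why fmask=0111 answers the complaint in the question: files come out rw-rw-rw- instead of rwxrwxrwx, while directories keep the execute (search) bit they need.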
Linux, fat32 and etc/fstab
1,343,357,466,000
Can somebody show me how to make Gentoo mount my USB? This is what I got when trying mount /dev/sdb1 /mnt: mount: wrong fs type, bad option, bad superblock on /dev/sdb1, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so dmesg says: FAT: codepage cp437 not found
You need to set the codepage and charset in kernel options:

make menuconfig ->
  File systems ->
    Native language support ->
      <*> Codepage 437 (United States, Canada)
      <*> NLS ISO 8859-1 (Latin 1; Western European Languages)
    DOS/FAT/NT Filesystems ->
      (437) Default codepage for FAT
      (iso8859-1) Default iocharset for FAT

and then recompile the kernel.
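Before recompiling, you can confirm which NLS options the current kernel is missing by grepping its config. The config string below is a made-up sample so the sketch runs anywhere; on a real system read /proc/config.gz (zcat) or /boot/config-$(uname -r) instead.

```shell
# Sample config; on a real system: config=$(zcat /proc/config.gz)
config='CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_NLS_CODEPAGE_437=y
# CONFIG_NLS_ISO8859_1 is not set'

# Report whether an option is built in (=y) or modular (=m)
check() {
  case $(printf '%s\n' "$config" | grep "^$1=") in
    *=y|*=m) echo "$1: enabled" ;;
    *)       echo "$1: not enabled" ;;
  esac
}

check CONFIG_NLS_CODEPAGE_437
check CONFIG_NLS_ISO8859_1
```

A missing CONFIG_NLS_CODEPAGE_437 is exactly what produces the "codepage cp437 not found" error at mount time.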
Mount USB (FAT) in Gentoo
1,343,357,466,000
I've noticed that when I mount a FAT filesystem on Linux, all of the files have their executable permissions set. Why is this? There's almost no chance that you can or want to directly execute any program found on a FAT file system, and having the executable bit implicitly set for all files seems annoying to me. I understand that FAT (and other filesystems as well) have no mode bits, and so the 777 mode I'm seeing on files is just simulated by the filesystem driver under Unix. My question is why 777 instead of 666?
FAT may not be a POSIX-style filesystem, but that doesn't mean that you shouldn't be allowed to store executables on it and run them directly from it. Because FAT doesn't store POSIX permissions, the only way this can happen (easily) is if the default mode used for files allows their execution... In the past, when (V)FAT was still used as the main filesystem for other operating systems (DOS and Windows), and hard drives were smaller, it wasn't unusual to store Unix/Linux binaries on a FAT filesystem. (There's even a FAT variant which stores POSIX attributes in special files, so you could run Linux on a FAT filesystem.) Nowadays you can still end up doing so -- on USB keys, for example. If you're worried about the security implications, there are a number of options you can use. noexec and nodev are probably already set for removable filesystems on your distribution; dmask and fmask allow you to specifically determine the modes used. showexec will only set the executable bits on files with .bat, .com or .exe extensions. (Note that a file's permissions and the ability to execute it are separate...)
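The resulting mode is simply 0777 with the fmask bits cleared (the same idea applies to directories via dmask), so you can predict what a given mask yields before mounting. A small sketch of that arithmetic:

```shell
# effective mode = 0777 & ~mask (fmask for files, dmask for directories)
effective_mode() {
  printf '%04o\n' $(( 0777 & ~$1 ))
}

effective_mode 0133   # strips all execute bits plus group/other write -> 0644
effective_mode 0022   # classic umask-style mask -> 0755
```

So mounting with fmask=0133 answers the question's complaint: files show up as 0644, not 0777.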
Why does Unix set the executable flag for FAT file systems? [closed]
1,343,357,466,000
I am experiencing the same problem described here: Fail to boot: Codepage not found. My error is: FAT-fs (sdx1): codepage cp437 not found My fstab mount command for the device is: LABEL=ESP /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 2 The above is automatically generated by a script and it hasn't changed recently. The problem started recently. I have already run mkinitcpio -p linux and it completes as expected without any errors. Other systems that are configured identically (afaics) do not have this issue. I have checked the wiki as suggested at the comment by Gilles on the other question, but I don't find the specific problem.
I'm running Arch Linux. This problem can generally be resolved by including vfat in the modules list in /etc/mkinitcpio.conf. Here's an example: MODULES=(nvidia vfat) However, another way this same error message can occur is if you boot Arch with a kernel version that does not exactly match the version of the libraries on your system. That's how I encountered it. I resolved it simply by booting with the correct kernel version.
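Appending the module to the MODULES array can be scripted with sed; this sketch assumes the parenthesized array syntax shown in the answer and runs the transformation on a sample line rather than editing the real /etc/mkinitcpio.conf:

```shell
# Sample line; for the real file you would run sed -i on /etc/mkinitcpio.conf
conf='MODULES=(nvidia)'

# Append "vfat" inside the existing MODULES=(...) array
printf '%s\n' "$conf" | sed 's/^MODULES=(\(.*\))/MODULES=(\1 vfat)/'
```

Remember to re-run mkinitcpio -p linux afterwards so the new module actually lands in the initramfs.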
failed to mount fat filesystem: codepage cp437 not found
1,343,357,466,000
Out of curiosity, is this possible nowadays? I remember some old Slackware versions did support FAT root partition but I am not sure if this is possible with modern kernels and if there are any distros offering such an option. I am interested in pure DOS FAT (without long names support), VFAT 16/32 and exFAT. PS: Don't tell me I shouldn't, I am not going to use this in production unless necessary :-)
OK, I tried it. First problem, right from the start: no support for hard or symbolic links. This means that I had to copy each file, duplicating it and wasting space. Second problem: no special file support at all. This means things like /dev/console are unavailable at boot time to init, before /dev is even remounted as tmpfs. Third problem: you will lose permission enforcement. But apart from this, there were no issues. My own system was booted successfully on a vfat volume. Normally I would not do that, either.
Can I install GNU/Linux on a FAT drive?
1,343,357,466,000
If I put a USB memory stick (FAT formatted) into a Windows PC, then unplug it without "ejecting" it, then put it in again, Windows is okay with that, without giving any warning about it "possibly having problems". But if I do the same with Linux (e.g. Ubuntu 15.04), then after inserting it the second time, I get warning messages in the log like: FAT-fs (sdf1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck. And if I subsequently put it in a Windows PC, I get a message popping up prompting me to check it for errors. Why is Linux handling of the FAT "dirty" flag so basic? I would have thought it would be better to only set the "dirty" flag if it really is potentially "dirty" -- e.g. something like: Set the "dirty" flag when a file is written. Clear the "dirty" flag when: all written files are closed, and/or the write cache is written out to disk, and of course, when the disk is unmounted. It would be nice if there was at least some mount option to operate in that mode, to reduce the chance of users getting "dirty" flag false alarms for merely plugging and unplugging a removable device, even though nothing was actually written.
My C is rough, but it looks like the FAT dirty bit is set when the superblock is read (this commit explains why it was implemented the way it was). Windows may choose not to set the bit until after something is changed in the FS, whereas Linux takes a more paranoid approach of setting it upon mount. To me it seems more efficient this way: assume dirty after mount and unset upon clean unmount; otherwise the kernel would have to track the dirty/clean state of the volume and check whether the bit needs to be set on each write. As you can see in the code and the commit comment, you can mount read-only and the bit will not be changed.
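For the curious, the flag fsck.fat complains about ("0x41: Dirty bit is set") lives at byte offset 0x41 of the FAT32 boot sector (0x25 for FAT12/16). The sketch below pokes and reads that byte in a scratch file rather than a real device, so it is safe to run; the offsets are as reported by dosfstools, not verified against every driver.

```shell
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null

# Set the FAT32 "dirty" flag byte at offset 0x41, as the driver does on mount
printf '\001' | dd of="$img" bs=1 seek=$((0x41)) conv=notrunc 2>/dev/null

# Read it back; on a real volume: od -An -tx1 -j $((0x41)) -N 1 /dev/sdX1
flag=$(od -An -tx1 -j $((0x41)) -N 1 "$img" | tr -d ' ')
echo "dirty flag byte: $flag"
rm -f "$img"
```

A clean unmount writes this byte back to 00, which is what Windows checks for.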
Why does Linux mark FAT as 'dirty' simply due to mounting it?
1,343,357,466,000
For example: I need to know when a pendrive was last mounted. Where could I see that? The pendrive has, e.g., a FAT32 or ext3 filesystem.
ext3 stores the last mount time and can be retrieved with: dumpe2fs -h /dev/node I'm not sure that FAT stores this information.
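To pull just that field out of dumpe2fs output in a script, a sed one-liner works. The sample output below is fabricated so the sketch runs without a real ext3 device; on a real system you would feed it `dumpe2fs -h /dev/sdb1` instead.

```shell
# Fabricated sample; real usage: sample=$(sudo dumpe2fs -h /dev/sdb1 2>/dev/null)
sample='Filesystem created:       Mon Mar 22 12:00:00 2021
Last mount time:          Tue Mar 23 08:15:00 2021'

last_mount=$(printf '%s\n' "$sample" | sed -n 's/^Last mount time:[[:space:]]*//p')
echo "$last_mount"
```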
Where can I see the last mount time?
1,343,357,466,000
Question: how does one define the size of a FAT16/FAT32 filesystem with mkdosfs in Linux? I want to create a, say, 2 GB FAT16|32 filesystem on a partition of 8 GB or more. This is how far I got:

created a partition with fdisk
mkdosfs -F 16 /dev/sdb1 creates a FAT16 filesystem over the WHOLE partition – as long as this isn't larger than 4 GB, I know
mkdosfs -F 32 /dev/sdb1 creates a FAT32 filesystem over the WHOLE partition. I know that this is the default and that I wouldn't need to specify -F 32, but for the sake of completeness and style…
according to man mkdosfs, the size of the filesystem is to be defined as the last argument. HOW?!

So far all of my attempts to define the size returned error messages. All of them. Guessing that I just ran into a massive misunderstanding, and being frustrated about not being able to solve such a simple question all by myself, I really wonder where the heck I missed something in defining the size.
This worked for me: # truncate -s 8G foobar # losetup -f --show foobar /dev/loop0 # mkdosfs -F 32 /dev/loop0 $((2*1024*1024)) mkfs.fat 3.0.25 (2014-01-17) Warning: block count mismatch: found 8388608 but assuming 2097152. Loop device does not match a floppy size, using default hd params # mount /dev/loop0 /mnt/tmp # df -h /mnt/tmp Filesystem Size Used Avail Use% Mounted on /dev/loop0 2.0G 4.0K 2.0G 1% /mnt/tmp An alternative approach, in case the mkfs does not have an option to limit the size, is to create a loop device with limit: # losetup -f --show --sizelimit $((2*1024*1024*1024)) /dev/loop0 /dev/loop1 # mkdosfs -F 32 /dev/loop1 mkfs.fat 3.0.25 (2014-01-17) Loop device does not match a floppy size, using default hd params # mount /dev/loop0 /mnt/tmp # df -h /mnt/tmp Filesystem Size Used Avail Use% Mounted on /dev/loop0 2.0G 4.0K 2.0G 1% /mnt/tmp Of course, if it's a partition, you could also shrink the partition, then enlarge it again.
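As the answer's $((2*1024*1024)) suggests, the trailing block-count argument to mkdosfs is in units of 1 KiB blocks, so the desired size only needs a unit conversion. A minimal sketch of that arithmetic (device path is a placeholder):

```shell
# mkdosfs [options] device [block-count]; block-count is in 1 KiB blocks
size_gib=2
block_count=$(( size_gib * 1024 * 1024 ))

echo "mkdosfs -F 32 /dev/loop0 $block_count"
```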
mkdosfs: define the size of a FAT16|32 file system on a USB pendrive in Linux
1,343,357,466,000
I have a bash script which copies data across to a USB stick. It works. The data is copied across fine, but the filenames are always changed. They are the same as they were before, but any longer names are cut to only 8 chars, with an extension of only 3 chars (11 chars max in total). So an original file called "willGetCutShorter.html" becomes "willGetS.htm" on the drive, whereas "small.txt" stays the same. Copied directory names are cut in the same way, all appearing 8 chars long (they have no extension, of course). I don't want this to happen; I want the file and directory names to not be modified at all. I don't know why this is happening either. In my bash script, I copy everything in my computer directory to the drive using an asterisk to represent all the directory contents. I'm wondering if this is why: perhaps cp is only grabbing part of the filename? Also, in Linux the files appear all in lowercase, even names that were originally partly uppercase. In Windows, however, all files and folders are uppercase. Why?

EDIT #1
I formatted the USB drive on a Windows 7 machine before I started using it in this way. In my /etc/fstab file I have added an entry for the drive that mounts it as msdos. This was because I read a manual page (probably 'man mount') and it said that the drive's format, which is FAT32, is covered by msdos. I wanted to mount it with FAT32 as the filesystem type, but I couldn't see that option in the mount manual page. The fstab entry means the mount command consists of only this:

sudo mount /mnt/

The copy commands in the bash script are all like this:

cp -f -r /path/to/dir/* /mnt/to/dir/

It sounds like I just need to reformat the drive to something else, or mount it slightly differently?
I suspect you are using a mount command like the one below: mount -t msdos /dev/XYZ /mnt/test This will force the partition to be mounted in legacy DOS FAT filesystem which uses the 8.3 filename convention (See https://en.wikipedia.org/wiki/8.3_filename) instead of vfat which uses Long filenames (https://en.wikipedia.org/wiki/Long_filename). Recommend using either of the below options for mount: mount /dev/XYZ /mnt/test (by default uses vfat, if it is FATXX formatted USB stick) or mount -t vfat /dev/XYZ /mnt/test (explicitly mount as vfat, if it is FATXX formatted USB stick)
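To see why names come out the way the question describes, here is a deliberately naive illustration of 8.3 truncation. It is not the driver's exact algorithm (real implementations also replace invalid characters and add ~N suffixes on collisions), but it shows the 8-character base / 3-character extension cut, with the case-folding that makes the msdos driver display lowercase:

```shell
# Naive 8.3 mangling: truncate base to 8 chars, extension to 3, fold case
to_83() {
  base=${1%.*}
  ext=${1##*.}
  printf '%.8s.%.3s\n' "$base" "$ext" | tr '[:upper:]' '[:lower:]'
}

to_83 willGetCutShorter.html   # base cut to 8 chars, extension to 3
to_83 small.txt                # already fits, unchanged
```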
Copying to USB drive modifies filename
1,343,357,466,000
I have several audio devices (car radio, portable radio, MP3 player) that take SD cards and USB sticks with a FAT filesystem on them. Because these devices have limited intelligence, they do not sort filenames on the FAT FS by name but merely play them in the order in which they were copied to the SD card. In MS-DOS and MS Windows this was not a problem; using a simple utility that sorted files alphabetically and then copied them across in that order did the trick. However, on Linux the files copied from the ext4 filesystem do not end up on the FAT FS in the same order in which they were read and copied across, presumably because there is a buffering mechanism in the way which improves efficiency but does not worry too much about the physical order in which the files end up on the target device. I have also tried to use Windows in a VirtualBox VM, but the files still end up being written in a different order than the one they were read in from the Linux filesystem. Is there a way (short of copying them across manually one by one and waiting for all write buffers to be flushed) to ensure that files end up on the FAT SD target in the order in which they were read from the ext4 filesystem?
I remember asking this a long time ago (you are welcome to search for it). My guess at this long future time is: mount the device with option sync (removes the buffering), sort the list to ensure that they are copied in order.
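Combining both points from the answer, the copy loop might look like the sketch below: feed files to the target in sorted order, one at a time. The temporary directories here stand in for the source directory and the sync-mounted stick, so the snippet runs anywhere; with `mount -o sync` each cp would hit the device before the next one starts.

```shell
src=$(mktemp -d); dst=$(mktemp -d)   # stand-ins for source dir and mounted stick
touch "$src/b track.mp3" "$src/a track.mp3"

# Copy in lexicographic order, one file per cp invocation
find "$src" -maxdepth 1 -type f | sort | while IFS= read -r f; do
  cp -- "$f" "$dst/"
done

copied=$(ls "$dst")
echo "$copied"
rm -rf "$src" "$dst"
```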
File order on FAT/FAT32/VFAT file systems
1,343,357,466,000
What's the easiest way to rename (change the volume label of) a fat16 volume (e.g. on a USB drive) from linux? It seems like mlabel from the mtools package is meant to do this, but the documentation is not geared to rapid assimilation.
Try sudo mlabel -i <device> ::<label>, for example sudo mlabel -i /dev/sdb1 ::new_label. Reference: RenameUSBDrive on the Ubuntu community documentation.
renaming a fat16 volume
1,343,357,466,000
I'm struggling to create a FAT-formatted disk image that can store a file of known size. In this case, a 1 GiB file. For example:

# Create a file that's 1 GiB in size.
dd if=/dev/zero iflag=count_bytes of=./large-file bs=1M count=1G

# Measure file size in KiB.
LARGE_FILE_SIZE_KIB="$(du --summarize --block-size=1024 large-file | cut --fields 1)"

# Create a FAT-formatted disk image.
mkfs.vfat -vv -C ./disk.img "${LARGE_FILE_SIZE_KIB}"

# Mount disk image using a loopback device.
mount -o loop ./disk.img /mnt

# Copy the large file to the disk image.
cp --archive ./large-file /mnt

The script fails with the following output:

++ dd if=/dev/zero iflag=count_bytes of=./large-file bs=1M count=1G
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 39.6962 s, 27.0 MB/s
+++ du --summarize --block-size=1024 large-file
+++ cut --fields 1
++ LARGE_FILE_SIZE_KIB=1048580
++ mkfs.vfat -vv -C ./disk.img 1048580
mkfs.fat 4.2 (2021-01-31)
Auto-selecting FAT32 for large filesystem
Boot jump code is eb 58
Using 32 reserved sectors
Trying with 8 sectors/cluster:
Trying FAT32: #clu=261627, fatlen=2048, maxclu=262144, limit=65525/268435446
Using sector 6 as backup boot sector (0 = none)
./disk.img has 64 heads and 63 sectors per track, hidden sectors 0x0000; logical sector size is 512, using 0xf8 media descriptor, with 2097144 sectors; drive number 0x80; filesystem has 2 32-bit FATs and 8 sectors per cluster. FAT size is 2048 sectors, and provides 261627 clusters. There are 32 reserved sectors. Volume ID is f0de10c3, no volume label.
++ mount -o loop ./disk.img /mnt
++ cp --archive ./large-file /mnt
cp: error writing '/mnt/large-file': No space left on device

How do I create a FAT-formatted disk image that's large enough to store a file of known size?
Resources:

https://linux.die.net/man/1/dd
https://linux.die.net/man/8/mkfs.vfat
https://linux.die.net/man/8/mount
https://linux.die.net/man/1/cp
https://en.wikipedia.org/wiki/Design_of_the_FAT_file_system#Size_limits

EDIT 1
My assumption was that mkfs.vfat -C ./disk.img N would create an image that has N KiB of usable space, but I guess that's not the case.

EDIT 2
It seems like a dead end to try to calculate exactly how big the disk image needs to be to store a file of known size, because of the complexities around FAT sector/cluster size limits. As suggested in the answers, I've settled for adding 20% extra space to the disk image to allow for FAT overhead.
It would seem to me you're trying to fit a file on a file system that doesn't have enough space – by design! You're basically saying "For a file that takes N kB, make a disk image exactly N kB in size". Where exactly is FAT going to store the file metadata, directory tables and volume descriptor?

With FAT32, the standard duplicate superblock, the usual directory table + long file name table, plus 32 reserved sectors somewhere on the disk, off the top of my head you'll want a couple of MB of extra space – around 4 MB, I guess, for file systems this huge¹.

You're also using mkfs.vfat with defaults, which was a sensible configuration for PCs made in the 80s and early 1990s; this means smaller sectors, hence more sectors to keep track of, and hence more consumption by the FAT to store a single file. Maximize the sector size: on a file system with more than 1 GB of available space (so, a disk image significantly larger than 1 GB!), 16 kB clusters should work (the minimum "FAT32-legal" cluster count is 65525; divide 1 GB by that assuming 4 kB sectors, so -S 4096, and maximize the cluster size to 16 sectors, so -s 16).

Also note: with 1 GB you're already within striking distance of the maximum file size of FAT32 – 4 GiB less one byte (and only 2 GiB with some implementations). So, if this is intended to be storage for e.g. backup or file system images, you might quickly find yourself in a situation where FAT32 doesn't suffice. It's a very old filesystem.

Addendum: how to figure out how much larger to make the image than the content

As sketched above, FAT is a bit annoying, because it has arbitrary (and surprisingly low!) limits on the number of sectors, which makes it a bit hard to predict how much overhead you'll incur. However, the good thing is that Linux supports sparse files, meaning that you can make an "empty" image that doesn't "cost" any storage space. That image can be larger than you need it to be! You then fill it with the data you need, and shrink it back to only the size you want.
Generally, your script from your question does a few questionable things, and there are more sensible ways to achieve the same; I'll comment in code on what my commands are equivalent to. The idea is simply: make a much larger-than-necessary, but "free" in terms of storage, image file first, fill it with the file or files you need, and check how much free space you have left. Subtract that space from your image size, make a new image, be done.

# File(s) we want to store
files=( *.data )   # whatever you want to store
img_file="fat32.img"

# First, figure out how much size we'll need.
# Use `stat` to get the size in bytes instead of parsing `du`'s output;
# replace the newline after each file size with a "+" and add the initial overhead.
initial_overhead=$(( 5 * 2**20 ))  # 5 MiB
# Then use your shell's arithmetic evaluation $(( … )) to execute that sum
totalsize=$(( $(stat -c '%s' -- "${files[@]}" | tr '\n' '+') initial_overhead ))

# Give an extra 20% (no floating-point math in bash…), then round up to 1 KiB blocks
img_size=$(( ( totalsize * 120 / 100 + 1023 ) / 1024 * 1024 ))

# Create a sparse file of that size
fallocate -l "${img_size}" -- "${img_file}"
mkfs.vfat -vv -- "${img_file}"

# Set up a loopback device as a regular user, and extract the
# loopback device name from the result
loopback=$(udisksctl loop-setup -f "${img_file}" | sed 's/.* \([^ ]*\)\.$/\1/')

# Mount the loopback device as a regular user, and get the mount path
mounted=$(udisksctl mount -b "${loopback}" | sed 's/^.* \([^ ]*\)$/\1/')

# Make sure we're good so far
[[ -d "${mounted}" ]] || { echo "couldn't get mount"; exit 1; }

# Copy over files…
cp -- "${files[@]}" "${mounted}"

# …and, while still mounted, use df to get the free space in 1 KiB blocks
free_space=$(df --block-size=1K --output=avail -- "${mounted}" | tail -n1)

# Unmount our file system image
udisksctl unmount -b "${loopback}"
udisksctl loop-delete -b "${loopback}"

# We no longer need our temporary image
rm -- "${img_file}"

# Shrink by the unused space, keeping 2 KiB of slack to be on the safe side
new_img_size=$(( img_size - (free_space - 2) * 1024 ))

# Make a new image, copy over files
fallocate -l "${new_img_size}" -- "${img_file}"
mkfs.vfat -vv -- "${img_file}"
loopback=$(udisksctl loop-setup -f "${img_file}" | sed 's/.* \([^ ]*\)\.$/\1/')
mounted=$(udisksctl mount -b "${loopback}" | sed 's/^.* \([^ ]*\)$/\1/')
[[ -d "${mounted}" ]] || { echo "final copy: couldn't get mount"; exit 1; }
cp -- "${files[@]}" "${mounted}"
udisksctl unmount -b "${loopback}"
udisksctl loop-delete -b "${loopback}"
# Done!

¹: when FAT32 was introduced, a 1 GB hard drive was still not a small one, and the file system structures hail down from FAT12, in 1981, designed for 360 kB floppy disks; the number of blocks you would have to keep account of on a modern drive was simply not going to materialize for another 15 years or so. In essence, smart phones formatting SD cards to FAT32 carry around a time capsule: a file system invented in ~1997, itself a relatively slight modification of a file system invented in 1980; so, yay, solving modern storage problems with compromise solutions from 44 years ago.
Create FAT-formatted disk image that can fit 1G file
1,343,357,466,000
I have a flash drive formatted with a FAT32 partition. When I pull it out before unmounting, it naturally has the dirty bit set, and when I use the drive in a Windows machine, Windows complains that the drive should be repaired. The Linux machine is an embedded device and has no "unmount" in its GUI. But I have SSH access to this machine and I tried to use the command below to clear the dirty bit:

root@system:~# fsck.fat -aw /dev/sda1
fsck.fat 4.1 (2017-01-24)
0x41: Dirty bit is set. Fs was not properly unmounted and some data may be corrupt.
 Automatically removing dirty bit.
Performing changes.
/dev/sda1: 4 files, 4/261376 clusters

Then I remove the drive (still no unmount) and when I plug it back into a Windows system it still shows the "drive should be repaired" message. So the question is, why does fsck not actually clear the dirty bit? Is there any way to prevent or clear the dirty bit so that unplugging the drive without a proper unmount doesn't trigger it?

The reason: I want to have a script or service that performs fsck to clear the dirty bit as soon as a drive is mounted. I mean, I want the device not to set the dirty bit at all, or to clear it as soon as a drive is inserted, because the user has no way of asking the system to perform the unmount.
There was a commit to dosfstools that fixed the dirty-bit handling in January 2021. This became release 4.2, and it seems you have 4.1.
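To check whether an installed dosfstools is new enough, a sort -V comparison works. The version strings below are hard-coded for illustration; on a real system you would parse the version out of fsck.fat's first output line instead.

```shell
# Real usage might be: have=$(fsck.fat 2>&1 | sed -n 's/^fsck\.fat \([0-9.]*\).*/\1/p')
have=4.1
need=4.2

# If the oldest of the two versions is $need, then $have >= $need
if [ "$(printf '%s\n' "$need" "$have" | sort -V | head -n1)" = "$need" ]; then
  echo "dirty-bit handling fixed"
else
  echo "upgrade dosfstools to >= $need"
fi
```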
fsck does not repair FAT32 partition correctly
1,343,357,466,000
Just encountered a problem: when rebooting a Linux system, timestamps of all files in the mounted VFAT filesystem are shown in the incorrect timezone. It seems like a device starts thinking that its local time is in UTC, so it displays all timestamps with the shift. Steps to reproduce: Create some small FAT-formatted image: dd if=/dev/zero of=small.img bs=1M seek=1 count=0 mkfs.vfat small.img Mount this image locally: mount -t vfat -o umask=0022,gid=1001,uid=1001 small.img mnt Set the timezone to some non-UTC one; Create a file in the mounted filesystem (ie. touch mnt/newfile) Observe the file modification/change timestamps: they are correct, concerning the currently set one: stat mnt/newfile File: mnt/newfile Size: 0 Blocks: 0 IO Block: 16384 regular empty file Device: 700h/1792d Inode: 40 Links: 1 Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2021-03-22 12:19:56.000000000 +0100 Modify: 2021-03-22 12:19:56.000000000 +0100 Change: 2021-03-22 12:19:56.000000000 +0100 Birth: - timedatectl Local time: Mon 2021-03-22 12:19:07 CET Universal time: Mon 2021-03-22 11:19:07 UTC RTC time: Mon 2021-03-22 11:19:07 Time zone: Europe/Vienna (CET, +0100) System clock synchronized: yes NTP service: active RTC in local TZ: no Unmount the filesystem, to check if anything has been changed with the remount: umount mnt; mount -t vfat -o umask=0022,gid=1001,uid=1001 small.img mnt; stat mnt/newfile File: mnt/newfile Size: 0 Blocks: 0 IO Block: 16384 regular empty file Device: 700h/1792d Inode: 64 Links: 1 Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2021-03-22 00:00:00.000000000 +0100 Modify: 2021-03-22 12:19:56.000000000 +0100 Change: 2021-03-22 12:19:56.000000000 +0100 Birth: - Reboot the system; Mount the image once more, have a look at the created file's timestamps: File: mnt/newfile Size: 0 Blocks: 0 IO Block: 16384 regular empty file Device: 700h/1792d Inode: 26 Links: 1 Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 
2021-03-22 01:00:00.000000000 +0100 Modify: 2021-03-22 13:19:56.000000000 +0100 Change: 2021-03-22 13:19:56.000000000 +0100 Birth: -

It is clearly observable that the time is shifted forward by 1 hour (12:19 to 13:19), while the timezone shown is the same, +0100. It looks like mount now thinks that the file timestamps were recorded in UTC, so it tries to display them with the "correct" shift. To check the validity of the previous statement, let's remount the same filesystem with the tz=UTC option explicitly:

mount -t vfat -o umask=0022,gid=1001,uid=1001,tz=UTC small.img mnt; stat mnt/newfile
File: mnt/newfile Size: 0 Blocks: 0 IO Block: 16384 regular empty file Device: 700h/1792d Inode: 50 Links: 1 Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2021-03-22 01:00:00.000000000 +0100
Modify: 2021-03-22 13:19:56.000000000 +0100
Change: 2021-03-22 13:19:56.000000000 +0100
Birth: -

Even though the system's timezone is indeed CET:

date
Mon Mar 22 12:26:42 CET 2021

P.S. https://stackoverflow.com/questions/10068855/how-do-i-get-the-correct-modified-datetime-of-a-fat32-file-regardless-of-timezo is not an answer to this question, since I can't get from it why this change is seen right after a reboot of the machine and not after a remount. If vfat stores timestamps in local time, why does mount after rebooting assume that the timestamps are in UTC rather than local time?
It seems the problem resides in the Linux kernel itself, as the timezone may (and usually does) differ between the kernel and userspace. The time.c file in kernel/time in the Linux kernel source tree holds (and exports) the struct timezone sys_tz, which is then used in fs/fat/misc.c in the FAT time <-> Unix time conversions. The tz_minuteswest field of this struct holds the difference between the current timezone and UTC, and it is taken into consideration if the tz=UTC option is not passed to the mount.vfat command. However, the aforementioned field is by default set to 0. As explained here,

Under Linux, there are some peculiar "warp clock" semantics associated with the settimeofday() system call if on the very first call (after booting) that has a non-NULL tz argument, the tv argument is NULL and the tz_minuteswest field is nonzero. (The tz_dsttime field should be zero for this case.) In such a case it is assumed that the CMOS clock is on local time, and that it has to be incremented by this amount to get UTC system time. No doubt it is a bad idea to use this feature.

So the only way to have the kernel (and its drivers) always see the correct timezone is to call settimeofday() with a tz argument, where tz.tz_minuteswest is the needed time offset "to the west" in minutes relative to UTC (i.e. -60 for CET, etc.) and tz_dsttime is set to 0, after each system boot. This can be achieved by setting (by any means) the system timezone to the current timezone after changing it to some other one (e.g. UTC), since command-line tools such as timedatectl usually do not perform the actual timezone change if the desired timezone equals the current one.
The following code was created to prove this concept:

#include <sys/time.h>
#include <stdio.h>

int main() {
    struct timeval tv;
    struct timezone tz;
    int ret = gettimeofday(&tv, &tz);
    printf("minuteswest: %d, dsttime: %d\n", tz.tz_minuteswest, tz.tz_dsttime);
    return ret;
}

This code's execution went as follows:

$ gcc testtz.cpp -o testtz
$ ./testtz
minuteswest: 0, dsttime: 0
$ timedatectl set-timezone Europe/Vienna
$ ./testtz
minuteswest: 0, dsttime: 0
$ timedatectl set-timezone UTC
$ timedatectl set-timezone Europe/Vienna
$ ./testtz
minuteswest: -60, dsttime: 0

This problem won't go away anytime soon, even though the usage of the timezone structure is considered obsolete. So for my problem, I'll consider using the time_offset=minutes option of mount.vfat.
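For the time_offset workaround mentioned at the end, the value (minutes east of UTC, matching what `date +%z` reports for the zone) can be computed in the mount script. This is a sketch; double-check the sign convention against mount(8) on your system before relying on it.

```shell
# Print a zone's UTC offset in minutes, parsed from date's +%z output (e.g. +0100)
tz_offset_minutes() {
  z=$(TZ=$1 date +%z)
  sign=${z%????}            # leading + or -
  hh=${z#?}; hh=${hh%??}    # hour digits
  mm=${z#???}               # minute digits
  hh=${hh#0}; mm=${mm#0}    # strip a leading zero so "08" isn't read as octal
  echo $(( ${sign}1 * (hh * 60 + mm) ))
}

tz_offset_minutes UTC0
# Then e.g.: mount -t vfat -o time_offset=$(tz_offset_minutes Europe/Vienna) small.img mnt
```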
VFAT, Linux: Invalid file timestamps shown after a reboot
1,343,357,466,000
How do I run a Linux script (e.g. configure) on a FAT (or similar) filesystem?

$ sudo chmod -R 777 .
$ ./configure
bash: ./configure: Permission denied

How can I solve this?
Assuming it is a shell script, manually passing it to the relevant interpreter should let you run it:

$ head -n 1 configure
#!/bin/bash
$ bash configure

FAT does not support individual file permissions, so you cannot assign one with chmod. It should, however, be possible to mount a FAT filesystem so that all files are treated as executable.
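The same trick can be generalized: read the interpreter from the script's shebang line and invoke it explicitly, sidestepping the missing execute bit entirely. A sketch using a throwaway script (naive: it ignores shebang arguments like `#!/bin/sh -e`):

```shell
script=$(mktemp)
printf '#!/bin/sh\necho hello from a FAT mount\n' > "$script"
chmod -x "$script"                        # simulate FAT's lack of an exec bit

interp=$(sed -n '1s/^#!//p' "$script")    # extract the interpreter path
out=$($interp "$script")                  # run the script via its interpreter
echo "$out"
rm -f "$script"
```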
How to run Linux script in FAT (it is not working like on Linux' FS)
1,343,357,466,000
I am trying to figure out what the following mount option for (v)FAT does exactly (in Linux):

allow_utime=### -- This option controls the permission check of mtime/atime.
20 - If current process is in group of file's group ID, you can change timestamp.
2 - Other users can change timestamp.
The default is set from `dmask' option. (If the directory is writable, utime(2) is also allowed. I.e. ~dmask & 022)
Normally utime(2) checks current process is owner of the file, or it has CAP_FOWNER capability. But FAT filesystem doesn't have uid/gid on disk, so normal check is too unflexible. With this option you can relax it. [source]

Question: What does the above mean? Trying to look it up, I ended up at the C code, which doesn't help me a lot, so neither this nor man 2 utime (as mentioned) helps me much at the moment. I'd love to use the source…

From utime:

The utime() system call changes the access and modification times of the inode specified by filename to the actime and modtime fields of times respectively.

I read this as: enables changing timestamps. Super extra kudos for whoever can give an actual example of how to use this mount option (allow_utime).
On a filesystem that supports normal Unix file attributes, each file has a user who is designated as its owner. Only the owner of a file may change its timestamps with utime. Other users aren't allowed to change timestamps, even if they have write permission. FAT filesystems don't record anything like an owner. The FAT filesystem driver pretends that a particular user is the owner of every file: either the user doing the mounting or the user given by the uid parameter. Using the normal rules, only that user is allowed to change timestamps. Files also have an owning group, determined by the gid parameter. FAT filesystems don't record Unix file permissions either, so the driver makes them up. It assigns permissions based on the umask, fmask and dmask parameters, so all directories and all regular files have the same permissions. When users other than the owner have write access to the filesystem, it would make sense that they'd be allowed not only to modify regular files and directories, but also file metadata. The main metadata of interest on a FAT filesystem is the timestamps on files. Normally, only the owning user can modify timestamps. By passing the allow_utime mount option, you can allow other users to change timestamps as well. For example, to allow the group foo to modify anything in the filesystem, and allow others to read but not write, you would pass the parameters gid=foo,umask=002,allow_utime=20 (this is actually the default value for allow_utime based on the umask).
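The default mentioned in the docs, ~dmask & 022, can be computed for any dmask; a quick sketch of what that rule yields for common masks:

```shell
# Default allow_utime derived from dmask: ~dmask & 022
default_allow_utime() {
  printf '%02o\n' $(( ~$1 & 0022 ))
}

default_allow_utime 0022   # only the owner may change timestamps
default_allow_utime 0002   # group-writable dirs: the group may, too
default_allow_utime 0000   # world-writable dirs: everyone may
```

So the answer's example value of 20 is exactly what a group-writable mount (dmask=0002, i.e. umask=002) would default to anyway.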
FAT Mountoption allow_utime explained
1,343,357,466,000
I am trying to get the map of empty space on any partition in a filesystem-agnostic way. To do this I create a file that uses all of the empty space, then use the 'filefrag -e' command (e2fsprogs v1.42.9) to create a map of the space (on Ubuntu 14.04 Trusty, tested with kernels 3.16.0-67 and 4.1.20-040120, dosfstools v3.0.26-1). This works for most filesystems, but for FAT filesystems specifically I am getting physical offsets beyond the size of the partition. Note the problem has now changed, please see the edit below. $ dd if=/dev/zero of=temp.img bs=512 count=2048000 $ sudo losetup /dev/loop1 ./temp.img $ sudo parted /dev/loop1 mklabel msdos $ sudo parted /dev/loop1 mkpart primary fat32 2048s 1026047s $ sudo blockdev --rereadpt /dev/loop1 $ sudo mkfs -t vfat /dev/loop1p1 $ sudo mount /dev/loop1p1 ./mnt $ sudo cp somefile1 ./mnt $ sudo cp somefile2 ./mnt $ df -B 512 ./mnt Filesystem 512B-blocks Used Available Use% Mounted on /dev/loop1p1 1023440 21232 1002208 3% ./mnt $ sudo dd if=/dev/zero of=./mnt/emptyspace.zeros bs=512 count=1002208 $ df -B 512 ./mnt Filesystem 512B-blocks Used Available Use% Mounted on /dev/loop1p1 1023440 1023440 0 100% ./mnt $ sudo filefrag -b512 -e ./mnt/emptyspace.zeros Filesystem type is: 4d44 File size of ./mnt/emptyspace.zeros is 513130496 (1002208 blocks of 512 bytes) ext: logical_offset: physical_offset: length: expected: flags: 0: 0.. 1002207: 348688.. 1350895: 1002208: 1350880: merged,eof ./mnt/emptyspace.zeros: 1 extent found $ cat /proc/mounts /dev/loop1p1 .../mnt vfat rw,relatime,fmask=0022,dmask=0022,codepage=437, iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 0 $ sudo umount /dev/loop1p1 $ sudo fsck /dev/loop1p1 fsck from util-linux 2.20.1 fsck.fat 3.0.26 (2014-03-07) /dev/loop1p1: 4 files, 63965/63965 clusters $ echo $? 
0

(filefrag returns physical offsets relative to the start of the partition)

$ cat /sys/class/block/loop1p1/start
2048
$ cat /sys/class/block/loop1p1/size
1024000

(sysfs start & size are in 512 byte sectors)

Clearly 1350895 is larger than 1024000. Is this a bug in the Linux vfat/fat implementation of the FIBMAP ioctl or is there another reason for this? I note EmmaV posts a comment alluding to this problem in this question but there wasn't a definitive answer. I have also been in touch with Theodore Ts'o (author of filefrag) and he has not indicated a known issue with filefrag.

EDIT: Further to this I have found the above problem is caused by a bug in e2fsprogs v1.42.9. A fix for this is available here which is first included in e2fsprogs v1.42.12. I have upgraded and tested and the output is very different. However, I am still getting a problem with FAT filesystems. The offset is now inside the partition at least, but comparing the content of a file with the blocks returned by filefrag yields a difference. I have written a python script here for testing. I would be grateful for any feedback and suggestions on what the problem is. Bonus points go to the person that can tell me the problem with mkfs for btrfs! :)
I have been in touch with OGAWA Hirofumi and Theodore Ts'o and tested various kernels and e2fsprogs tags. The remaining problem is fixed in e2fsprogs v1.43-WIP from 2015 onwards. I believe this commit fixed the issue. Full testing history and test script can be found here. The moral of the story: don't bother using filefrag for FAT filesystems unless it says 1.43-WIP and 2015+ at the bottom of the man page. I should also mention that hdparm --fibmap also has a buggy implementation in v9.43. You'll need at least v9.45 but I haven't thoroughly validated hdparm like filefrag.
filefrag fibmap returning wrong physical offset for FAT
1,343,357,466,000
So, I created a FAT16 partition in the following way:

I plugged in my 16 GB thumbdrive.
dd if=/dev/zero of=/dev/sdX count=1
Opened up cfdisk for simplicity.
Selected dos label type.
Created new partition of type "FAT16 <32M".
Wrote the changes to the partition.
mkfs -t vfat /dev/sdXY

I am surprised to see that it really worked well! So well that I have more than a GiB of data on it right now. My question is: how is FAT16 able to hold that much data? Is there any chance of losing the data?
Since you didn’t specify a FAT size with mkfs’ -F option, it chose the appropriate size for your partition’s size (in your case, FAT32). mkfs.vfat doesn’t care what partition type you selected in fdisk.
How FAT16 worked on a 16 GB thumbdrive flawlessly?
1,343,357,466,000
I was under the impression that the only character not allowed in file names was /, but I don't seem to be able to create a file whose name contains characters such as *, \, ", :, |, < or > either. For instance,

$ echo "*" > *
$ touch "*"
touch: setting times of `*': No such file or directory

I'm writing a shell script that requires the user to type a file name and I would like to make sure the name doesn't have any invalid characters. Is there a list somewhere?

Update: I didn't realise I was doing this in a FAT32 filesystem (USB stick), which does not allow `*' and other characters in file names. No wonder it didn't work. Sorry, my bad!!
I'm pretty sure the only reserved bytes are 0 (ASCII Nul) and 0x2f (ASCII '/', forward slash). You can easily make a file with '.', '\' and other funky things in it. Think "unicode file names", which contain all kinds of weird byte values. Naturally, you can't have duplicate file names in the same directory, so files named "." and ".." can't usually be made by a mere user: the filesystem part of the kernel creates them when a directory gets created. Also note that most filesystems have a limit on the length of a file name, and you can't create a file whose name is the empty string.
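The constraints above can be collected into a small check; note that the 255-byte limit and the exact reserved names are assumptions about typical Unix filesystems, not universal rules:

```python
def is_valid_filename(name: bytes, max_len: int = 255) -> bool:
    """Check one path component against the constraints described above."""
    if name in (b"", b".", b".."):
        return False  # empty names and the two built-in directory entries
    if len(name) > max_len:
        return False  # typical per-component length limit
    # The only universally forbidden bytes: NUL and '/' (the path separator).
    return b"\x00" not in name and b"/" not in name

print(is_valid_filename(b'* \\ " : | < >'))  # prints True: all fine on Unix filesystems
```

On a FAT filesystem, as the questioner discovered, the driver enforces a much larger set of forbidden characters on top of this.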
Reserved characters in file names
1,343,357,466,000
When playing with filesystems and partitions, I realized that when I created an ext file system on my USB drive and plugged it into Windows, I was forced to format it. On the other hand, when building a FAT partition on Windows and plugging it into my virtual machine, Linux was perfectly able to read and mount my FAT partition.

1 - Why can't Windows read Linux filesystems?
2 - What's the key difference that allows Linux to do it, yet Windows can't?
Windows can’t read “Linux” file systems (such as Ext4 or XFS) by default because it doesn’t ship with drivers for them. You can install software such as Ext2fsd to gain read access to Ext2/3/4 file systems. Linux can access FAT file systems because the kernel has a FAT file system driver, and most distributions enable it by default. There are cases where Linux distributions won’t be able to access a Windows-formatted USB key by default: large keys are typically formatted using ExFAT, and the Linux kernel doesn’t support that. You would have to install a separate ExFAT driver in this situation. There’s nothing inherent in Windows or Linux which limits their ability to support file systems; it’s really down to the availability of drivers. Linux supports Windows file systems because they are very popular; this then provides a common basis for file exchange, meaning that there is less need for Windows to support Linux file systems.
Why can't Windows read Linux filesystems? [closed]
1,343,357,466,000
So I would like to be able to run Linux on a FAT32 (preferably exFAT) partition - is this possible? It appears this has been done: Can I install GNU/Linux on a FAT drive? Couldn't shortcuts be used in place of hard and soft symbolic links? Watching random videos on YouTube, it appears exFAT is faster than NTFS in most cases (https://www.youtube.com/watch?v=fc98Vgc25hM), which would make me believe it would be a good candidate for a file system.
It's technically possible. The posixovl filesystem allows storing files on a FAT filesystem, with extra metadata stored in additional files to implement things that FAT doesn't provide: file names containing characters that FAT forbids or that are too long, additional metadata such as permissions and ownership, other file types such as symbolic links and devices, etc. That doesn't mean that it's a good idea, though. It would be difficult to set up (I don't know of any distribution that sets it up for you) and slow. Shortcuts could in theory be read as symbolic links, but this would have several downsides. Someone would need to write a filesystem driver that stores symbolic links as shortcuts. Windows might mess up symbolic links when it edits shortcuts (shortcuts are only very vaguely like symbolic links: symbolic links point to a file path, whereas Windows shortcuts track a file and Windows modifies the shortcut if the target file is moved). Linux would have no way to tell whether a file that looks like a shortcut is in fact intended to be a symbolic link or a regular file. There used to be a way to install Linux on a disk image which is stored as a single file on a Windows system, called Wubi. It has been abandoned. It works, but it too has a number of downsides: lower performance, high risk of losing data if the system crashes, etc. The normal way to install Linux is the best way: let the installer create a Linux partition. If you really, really don't want to create a Linux partition (for example because your corporate IT management forbids it), run Linux in a virtual machine. With Windows 10, you can run many Linux applications through the Windows Subsystem for Linux; you can get a whole Ubuntu userspace that way.
Is it possible to run basic Linux on a permissionless file system (e.g. FAT32)?
1,343,357,466,000
I am connecting a 4-bay HDD hub (FANTEC QB-35US3-6G) via USB to my Raspberry Pi. I have two disks inside the hub and formatted them as FAT. The formatting I did on a Mac because I was not able to see the unformatted disks in the hub with blkid when connected to the Raspberry, which is strange.

When I run sudo blkid I see

/dev/sdc1: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="67E3-17ED" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="e36842bb-f2a9-4a3e-99b6-bbd4a54f39f6"
/dev/sdc2: LABEL_FATBOOT="WD3" LABEL="WD3" UUID="4568-1704" TYPE="vfat" PARTUUID="576db57a-0543-4f9b-b3e4-4cf452cbdda3"
/dev/sdd1: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="67E3-17ED" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="c2a64dbc-5b9a-458e-b0a8-04d6f5fd8956"
/dev/sdd2: LABEL_FATBOOT="WD1" LABEL="WD1" UUID="D719-1706" TYPE="vfat" PARTUUID="2cced532-4870-43f1-8226-4f413e513f33"

fdisk -l shows

GPT PMBR size mismatch (4294967294 != 5860533167) will be corrected by write.
Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: EFRX-68AX9N0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: BFC5ECE6-8901-4C6C-A2BA-C14DA6AD5890

Device     Start        End    Sectors  Size Type
/dev/sdc1     40     409639     409600  200M EFI System
/dev/sdc2 411648 5860532223 5860120576  2.7T Microsoft basic data

Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: 01FALS-40Y6A0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 9B3E10E4-6E9B-4CE0-A7EF-691A6EA14CC5

Device      Start        End    Sectors   Size Type
/dev/sdd1      40     409639     409600   200M EFI System
/dev/sdd2  411648 1953523711 1953112064 931.3G Microsoft basic data

Is this a special thing related to the USB hub? Or is it normal that a FAT partition will also create an EFI System device?
EDIT: Funny, since I did nothing else than format the disks:

$ sudo mount /dev/sdc2 /mnt
$ ll /mnt
total 132
drwxr-xr-x  4 pi   pi   32768 Jan  1  1970 ./
drwxr-xr-x 21 root root  4096 Jul 10 02:41 ../
-rw-r--r--  1 pi   pi    4096 Oct 12  2019 ._.com.apple.timemachine.donotpresent
-rw-r--r--  1 pi   pi       0 Oct 12  2019 .com.apple.timemachine.donotpresent
drwxr-xr-x  2 pi   pi   32768 Oct 12  2019 .fseventsd/
drwxr-xr-x  4 pi   pi   32768 Oct 12  2019 .Spotlight-V100/
You have two GPT-formatted "disks". Both have a 200 MB EFI system partition. sdc has a "PMBR size mismatch", meaning the protective MBR doesn't match the GPT. In other words, possibly a mess - and the way you tell it, one made with an external multi-disk enclosure formatted on a different system.

ADDED: I also don't like Start=40; I have 2048, which keeps the first MB "out of harm's way" (harm = some stray MBR sector write). But fdisk does say "size mismatch" and "will be corrected".

See the comments for how we found the answers, and the "WHY?" at the bottom for the big question. I leave it like this. Thanks!

"The formatting I did" - no joke and no insult intended: are you sure what you and macOS exactly did?

WHY does the Mac do that? It is a very good idea to reserve a 200 MB (or even slightly larger) partition in case you want to make the disk EFI-bootable later (an initrd plus kernel can total around 50 MB as files).
Why am I seeing an EFI partition when I created a FAT on my external USB hub?
1,343,357,466,000
How does a Linux OS format an SD card and magically fix everything? I have an STM32 running FreeRTOS and FAT-FS. When I have a corrupted SD card and FAT-FS can't do anything about it, I format the SD card through Linux and everything starts working again. How does Linux format an SD card? FAT-FS says there is a physical error (driver level error, so basically the uC inside the SD is not responding with what we expect).
I have an old phone. If I let it write the SD card, sometimes it writes bad sectors. I suspect this happens when the battery is low, and because the phone fails to satisfy the standard electrical requirements of the SD card.

On a modern block device, a bad logical block might repeatedly fail to read (checksum mismatch), but be "repaired" if you successfully write new contents to it. When my phone corrupts my SD card, all I need to do is reformat the card. I don't even need to rewrite all the blocks (sometimes called a "full format"). During and after a reformat, the filesystem will never read a block which it has not already written (there is never any reason for it to do so).

There are some risks to this approach. It's possible that your device is actually permanently damaged, and that problems will recur some time after formatting. If this is a concern, the safest approach is to test the device or partition somehow before formatting it. (Historically you were supposed to use badblocks, but I'm not sure how well it works nowadays.)

If one of the data blocks of a file is bad, you might be able to recover by deleting or overwriting the file. The problem is when you have a bad block in the filesystem's internal structures. Typically filesystems do not include any code that would reset these to a default initial state - there is far too high a risk of silently losing data - so the error will persist. Some filesystem checkers might ask if you want to reset the bad block, though. (Side note: with Linux's fsck.vfat specifically, I've had filesystems where it just gives up and says it hasn't implemented a specific type of repair. I suspect the Windows version is a bit more comprehensive.)

Some filesystems might support restoring certain structures using a redundant copy, instead of having to reset them. FAT filesystems tend to be run with two redundant FATs, which can be used for recovery, e.g. by fsck.vfat on Linux.
Ext4 tends to keep a large number of redundant "superblocks". I understand filesystems like btrfs and ZFS can be configured to keep redundant copies of all metadata on separate devices, and be repaired while they are still running. [expanded from this comment: Defining a state of a failed SD cards by kernel tracing? ]
How does unix / linux format an SD card?
1,343,357,466,000
I have ArchLinux (Linux comp001 3.18.7-1-ARCH #1 PREEMPT Wed Feb 11 11:38:34 MST 2015 armv6l GNU/Linux) for ARM installed on an rPi and here is my /etc/fstab file:

#
# /etc/fstab: static file system information
#
# <file system> <dir> <type> <options> <dump> <pass>
/dev/mmcblk0p1 /boot vfat defaults 0 0
/dev/mmcblk0p3 /mnt/data vfat noexec,rw,noatime,user,umask=022 0 2

Partition /dev/mmcblk0p3 (a microSD card FAT32 partition) is mounted on /mnt/data with the rw option, but if I list the /mnt directory, I get:

total 20
 4 drwxr-xr-x  3 root root  4096 Sep 18 13:27 .
 4 drwxr-xr-x 18 root root  4096 Jan  9 11:08 ..
12 drwxr-xr-x  3 root root 12288 Jan  1  1970 data

Why is the write permission bit not set on data?
You are confusing the rw option with the umask. The rw option merely dictates that the partition is not mounted read-only. The umask option dictates which permission bits are not set on files and directories. Your current umask of 022 sets the permission bits to 755, which translates to rwxr-xr-x. Change the umask to 000, which should give you 777 or rwxrwxrwx permissions. More info on umask is available on Wikipedia.
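The arithmetic can be spelled out: the effective permission bits are a notional 777 with the umask bits cleared. A quick sketch:

```python
def fat_mode(umask: int, base: int = 0o777) -> int:
    """Permission bits the vfat driver presents: base with the umask bits cleared."""
    return base & ~umask

print(oct(fat_mode(0o022)))  # prints 0o755, i.e. rwxr-xr-x as seen on the data directory
print(oct(fat_mode(0o000)))  # prints 0o777, i.e. rwxrwxrwx
```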
/etc/fstab rw option is being ignored for microSD card partition in ArchLinux
1,294,409,653,000
While browsing through the Kernel Makefiles, I found these terms. So I would like to know what is the difference between vmlinux, vmlinuz, vmlinux.bin, zimage & bzimage?
vmlinux

This is the Linux kernel in a statically linked executable file format. Generally, you don't have to worry about this file; it's just an intermediate step in the boot procedure. The raw vmlinux file may be useful for debugging purposes.

vmlinux.bin

The same as vmlinux, but in a bootable raw binary file format. All symbols and relocation information is discarded. Generated from vmlinux by objcopy -O binary vmlinux vmlinux.bin.

vmlinuz

The vmlinux file usually gets compressed with zlib. Since 2.6.30, LZMA and bzip2 are also available. By adding further boot and decompression capabilities to vmlinuz, the image can be used to boot a system with the vmlinux kernel. The compression of vmlinux can occur with zImage or bzImage. The function decompress_kernel() handles the decompression of vmlinuz at bootup; a message indicates this:

Decompressing Linux... done
Booting the kernel.

zImage (make zImage)

This is the old format for small kernels (compressed, below 512KB). At boot, this image gets loaded low in memory (the first 640KB of RAM).

bzImage (make bzImage)

The big zImage (this has nothing to do with bzip2) was created as the kernel grew, and handles bigger images (compressed, over 512KB). The image gets loaded high in memory (above 1MB RAM). As today's kernels are well over 512KB, this is usually the preferred way.

An inspection on Ubuntu 10.10 shows:

ls -lh /boot/vmlinuz-$(uname -r)
-rw-r--r-- 1 root root 4.1M 2010-11-24 12:21 /boot/vmlinuz-2.6.35-23-generic

file /boot/vmlinuz-$(uname -r)
/boot/vmlinuz-2.6.35-23-generic: Linux kernel x86 boot executable bzImage, version 2.6.35-23-generic (buildd@rosea, RO-rootFS, root_dev 0x6801, swap_dev 0x4, Normal VGA
What is the difference between the following kernel Makefile terms: vmlinux, vmlinuz, vmlinux.bin, zImage & bzImage?
1,294,409,653,000
Someone sent me a ZIP file containing files with Hebrew names (and created on Windows, not sure with which tool). I use LXDE on Debian Stretch. The Gnome archive manager manages to unzip the file, but the Hebrew characters are garbled. I think I'm getting UTF-8 octets extended into Unicode characters, e.g. I have a file whose name has four characters and a .doc suffix, and the characters are: 0x008E 0x0087 0x008E 0x0085.

Using the command-line unzip utility is even worse - it refuses to decompress altogether, complaining about an "Invalid or incomplete multibyte or wide character".

So, my questions are:

Is there another decompression utility that will decompress my files with the correct names?
Is there something wrong with the way the file was compressed, or is it just an incompatibility of ZIP implementations? Or even a misfeature/bug of the Linux ZIP utilities?
What can I do to get the correct filenames after having decompressed using the garbled ones?
It sounds like the filenames are encoded in one of Windows' proprietary codepages (CP862, 1255, etc). Is there another decompression utility that will decompress my files with the correct names? I'm not aware of a zip utility that supports these code pages natively. 7z has some understanding of encodings, but I believe it has to be an encoding your system knows about more generally (you pick it by setting the LANG environment variable) and Windows codepages likely aren't among those. unzip -UU should work from the command line to create files with the correct bytes in their names (by disabling all Unicode support). That is probably the effect you got from GNOME's tool already. The encoding won't be right either way, but we can fix that below. Is there something wrong with the way the file was compressed, or is it just an incompatibility of ZIP implementations? Or even misfeature/bug of the Linux ZIP utilities? The file you've been given was not created portably. That's not necessarily wrong for an internal use where the encoding is fixed and known in advance, although the format specification says that names are supposed to be either UTF-8 or cp437 and yours are neither. Even between Windows machines, using different codepages doesn't work out well, but non-Windows machines have no concept of those code pages to begin with. Most tools UTF-8 encode their filenames (which still isn't always enough to avoid problems). What can I do to get the correct filenames after having decompressed using the garbled ones? If you can identify the encoding of the filenames, you can convert the bytes in the existing names into UTF-8 and move the existing files to the right name. The convmv tool essentially wraps up that process into a single command: convmv -f cp862 -t utf8 -r . will try to convert everything inside . from cp862 to UTF-8. Alternatively, you can use iconv and find to move everything to their correct names. 
Something like: find -mindepth 1 -exec sh -c 'mv "$1" "$(echo "$1" | iconv -f cp862 -t utf8)"' sh {} \; will find all the files underneath the current directory and try to convert the names into UTF-8. In either case, you can experiment with different encodings and try to find one that makes sense. After you've fixed the encoding for you, if you want to send these files back in the other direction it's possible you'll have the same problem on the other end. In that case, you can reverse the process before zipping the files up with -UU, since it's likely to be very hard to fix on the Windows end.
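The repair convmv performs can also be sketched in a few lines of Python. The cp862 codepage here is an assumption inferred from the question's byte values (in DOS-Hebrew cp862, bytes 0x80–0x9A are the Hebrew letters, so 0x8E 0x87 0x8E 0x85 decode to four of them); substitute whichever encoding actually fits your names:

```python
import os  # only needed for the commented-out rename loop below

def fix_name(raw: bytes, encoding: str = "cp862") -> str:
    """Reinterpret raw filename bytes as a DOS codepage, yielding proper Unicode."""
    return raw.decode(encoding)

# The question's four-character name: bytes 8E 87 8E 85 decode to Hebrew letters.
fixed = fix_name(b"\x8e\x87\x8e\x85")
print(len(fixed), all("\u05d0" <= c <= "\u05ea" for c in fixed))  # prints: 4 True

# To rename files extracted with unzip -UU (sketch; run inside that directory):
# for name in os.listdir(b"."):
#     os.rename(name, fix_name(name).encode("utf-8"))
```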
How can I correctly decompress a ZIP archive of files with Hebrew names?
1,294,409,653,000
I'm trying to send a bug report for the app file (/usr/bin/file). But consulting the man page:

BUGS
Please report bugs and send patches to the bug tracker at http://bugs.gw.com/ or the mailing list at ⟨[email protected]⟩ (visit http://mx.gw.com/mailman/listinfo/file first to subscribe).

and sending an email made me find out the mail address does not exist. Is there another way of communicating with the community? Hopefully this here question is already part of it :)

So here's my email:

possible feature failure: the --extension option doesn't seem to output anything

$ file --extension "ab.gif"
ab.gif: ???

It would be useful to easily be able to use the output of this to rename a file to its correct extension. Something like file --likely_extension would output only the likely detected extension, or an error if the detection confidence was too low, like thus:

$ file --likely_extension "ab.gif"
gif

better though would be a --correct_extension option:

$ file --correct_extension "ab.jpg"
$ ls
ab.gif

Tyvm for this app :)
You are following the correct procedure to file an issue or enhancement request: if a program’s documentation mentions how to do so, follow those instructions. Unfortunately it often happens that projects die, or that the instructions in the version you have are no longer accurate. In these cases, things become a little harder. One possible general approach is to file a bug with your distribution; success there can be rather hit-or-miss though... (I should mention that it’s usually better to report a bug to the distribution you got your package from, if you’re using a package; this is especially true if the packaged version is older than the current “upstream” version, and if you haven’t checked whether the issue is still present there.) For file specifically, the official documentation has been updated to mention that the bug tracker and mailing list are down, and it also provides a direct email address for the current maintainer, which you could use to contact him.
What's the correct way to ask for a feature in GNU Linux?
1,294,409,653,000
I have found the term "LSB executable" or "LSB shared object" in the output of the file command in Linux. For example:

$ file /bin/ls
/bin/ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=4637713da6cd9aa30d1528471c930f88a39045ff, stripped

What does "LSB" mean in this context?
“LSB” here stands for “least-significant byte” (first), as opposed to “MSB”, “most-significant byte”. It means that the binary is little-endian. file determines this from the sixth byte of the ELF header.
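Concretely, that sixth byte is the EI_DATA field of the ELF identification: 1 means LSB (little-endian), 2 means MSB (big-endian). A minimal sketch of the check:

```python
def elf_byte_order(header: bytes) -> str:
    """Classify an ELF file's byte order from the EI_DATA field (sixth byte)."""
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    return {1: "LSB", 2: "MSB"}.get(header[5], "invalid")

# \x02 after the magic = 64-bit class, \x01 = little-endian data,
# matching the "ELF 64-bit LSB executable" output for /bin/ls above.
print(elf_byte_order(b"\x7fELF\x02\x01"))  # prints LSB
```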
What does "LSB" mean when referring to executable files in the output of /bin/file?
1,294,409,653,000
I need to recognize the type of data contained in random files. I am new to Linux. I am planning to use the file command to understand what type of data a file has. I tried that command and got the output below. Someone suggested to me that the file command looks at the initial bytes of a file to determine the data type, and doesn't look at the file extension at all. Is that correct? I looked at the man page but felt that it was too technical. I would appreciate it if anyone could provide a link to a much simpler explanation of how the file command works. What are the different possible answers that I could get after running the file command? For example, in the transcript below I get JPEG, ISO media, ASCII, etc.

The screen output is as follows:

m7% file date-file.csv
date-file.csv: ASCII text, with CRLF line terminators
m7% file image-file.JPG
image-file.JPG: JPEG image data, EXIF standard
m7% file music-file.m4a
music-file.m4a: ISO Media, MPEG v4 system, iTunes AAC-LC
m7% file numbers-file.txt
numbers-file.txt: ASCII text
m7% file pdf-file.pdf
pdf-file.pdf: PDF document, version 1.4
m7% file text-file.txt
text-file.txt: ASCII text
m7% file video-file.MOV
video-file.MOV: data

Update 1

Thanks for the answers; they clarified a couple of things for me. So if I understand correctly, /usr/share/mime/magic is a database that lists the possible file formats (outputs I can get when I run the file command on a file). Is that correct? Is it true that whenever the file command's output contains the word "text" it refers to something you can read with a text viewer, and anything without "text" is some kind of binary?
file uses several kinds of test: 1: If file does not exist, cannot be read, or its file status could not be determined, the output shall indicate that the file was processed, but that its type could not be determined. This will be output like cannot open file: No such file or directory. 2: If the file is not a regular file, its file type shall be identified. The file types directory, FIFO, socket, block special, and character special shall be identified as such. Other implementation-defined file types may also be identified. If file is a symbolic link, by default the link shall be resolved and file shall test the type of file referenced by the symbolic link. (See the -h and -i options below.) This will be output like .: directory and /dev/sda: block special. Much of the format for this and the previous point is partially defined by POSIX - you can rely on certain strings being in the output. 3: If the length of file is zero, it shall be identified as an empty file. This is foo: empty. 4: The file utility shall examine an initial segment of file and shall make a guess at identifying its contents based on position-sensitive tests. (The answer is not guaranteed to be correct; see the -d, -M, and -m options below.) 5: The file utility shall examine file and make a guess at identifying its contents based on context-sensitive default system tests. (The answer is not guaranteed to be correct.) These two use magic number identification and are the most interesting part of the command. A magic number is a special sequence of bytes that's in a known place in a file that identifies its type. Traditionally that place is the first two bytes, but the term has been extended further to include longer strings and other locations. See this other question for more detail about magic numbers in the file command. The file command has a database of these numbers and what type they correspond to; that database is usually in /usr/share/mime/magic, and maps file contents to MIME types. 
The output there (often part of file -i if you don't get it by default) will be a defined media type or an extension. "Context-sensitive tests" use the same sort of approach, but are a bit fuzzier. None of these are guaranteed to be right, but they're intended to be good guesses. file also has a database mapping those types to names, by which it will know that a file it has identified as application/pdf can be described as a PDF document. Those human-readable names may be localised to another language too. These will always be some high-level description of the file type in a way a person will understand, rather than a machine. The majority of different outputs you can get will come from these stages. You can look at the magic file for a list of supported types and how they're identified - my system knows 376 different types. The names given and the types supported are determined by your system packaging and configuration, and so your system may support more or fewer than mine, but there are generally a lot of them. libmagic also includes additional hard-coded tests in it. 6: The file shall be identified as a data file. This is foo: data, when it failed to figure out anything at all about the file. There are also other little tags that can appear. An executable (+x) file will include "executable" in the output, usually comma-separated. The file implementation may also know extra things about some file formats to be able to describe additional points about them, as in your "PDF document, version 1.4".
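In miniature, the magic tests described above are a prefix lookup against a table of known signatures. A toy sketch with a handful of well-known magic numbers (a tiny illustrative subset, nothing like file's real database):

```python
# A few well-known magic numbers, mapped to human-readable descriptions.
MAGIC = {
    b"\x7fELF": "ELF executable",
    b"%PDF": "PDF document",
    b"\x89PNG\r\n\x1a\n": "PNG image data",
    b"\xff\xd8\xff": "JPEG image data",
    b"GIF8": "GIF image data",
}

def identify(data: bytes) -> str:
    """Guess a file type from its leading bytes, like file's magic tests."""
    for magic, description in MAGIC.items():
        if data.startswith(magic):
            return description
    return "data"  # nothing matched: the same fallback as file's last resort

print(identify(b"%PDF-1.4 ..."))  # prints PDF document
```

The "video-file.MOV: data" line in the question is exactly that fallback: none of the tests matched.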
Linux file command classifying files
1,294,409,653,000
I was reading about the file command and I came across something I don't quite understand: file is designed to determine the kind of file being queried.... file accomplishes this by performing three sets of tests on the file in question: filesystem tests, magic tests, language tests What are magic tests?
That refers to the "magic bytes" which many file formats have at the beginning of a file which show what kind of file this is. E.g. if a file starts with #! then it is considered a script.
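The script example really is just a two-byte prefix check; a minimal sketch:

```python
def looks_like_script(data: bytes) -> bool:
    """True if the file begins with the '#!' interpreter magic."""
    return data.startswith(b"#!")

print(looks_like_script(b"#!/bin/sh\necho hi\n"))  # prints True
```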
What does “magic tests” mean for the file command?
1,294,409,653,000
Which format (Mac or DOS) should I use on Linux PCs/clusters? I know the difference:

DOS format uses "carriage return" (CR or \r) then "line feed" (LF or \n).
Mac format uses "carriage return" (CR or \r).
Unix uses "line feed" (LF or \n).

I also know how to select the option:

Alt+M for Mac format
Alt+D for DOS format

But there is no Unix format. Then save the file with Enter.
Use neither: enter a filename and press Enter, and the file will be saved with the default Unix line-endings (which is what you want on Linux). If nano tells you it’s going to use DOS or Mac format (which happens if it loaded a file in DOS or Mac format), i.e. you see File Name to Write [DOS Format]: or File Name to Write [Mac Format]: press AltD or AltM respectively to deselect DOS or Mac format, which effectively selects the default Unix format.
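If a file did end up with DOS or Mac line endings and you want Unix ones, the conversion is a simple byte substitution (a sketch of what dedicated tools such as dos2unix do):

```python
def to_unix(data: bytes) -> bytes:
    """Convert DOS (CRLF) and classic Mac (CR) line endings to Unix (LF)."""
    # CRLF must be handled before bare CR, or each CRLF would become two LFs.
    return data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")

print(to_unix(b"one\r\ntwo\rthree\n"))  # prints b'one\ntwo\nthree\n'
```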
GNU nano 2: DOS Format or Mac Format on Linux
1,294,409,653,000
How can we read Microsoft Word (.doc) files in a Linux system? It doesn't support .doc files. I tried strings filename.doc | less but it gives ugly output. Any other option? I would prefer a GUI based tool.
If you want a graphical solution, then you might be able to open them with Open Office or Libre Office. There's also antiword Antiword is a free MS Word reader for Linux and RISC OS. There are ports to FreeBSD, BeOS, OS/2, Mac OS X, Amiga, VMS, NetWare, Plan9, EPOC, Zaurus PDA, MorphOS, Tru64/OSF, Minix, Solaris and DOS. Antiword converts the binary files from Word 2, 6, 7, 97, 2000, 2002 and 2003 to plain text and to PostScript. catdoc - Catdoc is a MS Word file decoding tool that doesn't attempt to analyze file formatting (it just extracts readable text), but is able to handle all versions of Word and convert character encodings. And a couple of other options mentioned here (linux.com).
How to read Word .doc files?
1,294,409,653,000
I know that in ~/.bashrc one must not put spaces around = signs in assignment:

$ tail -n2 ~/.bashrc
alias a="echo 'You hit a!'"
alias b = "echo 'You hit b!'"
$ a
You hit a!
$ b
b: command not found

I'm reviewing the MySQL config file /etc/my.cnf and I've found this:

tmpdir=/mnt/ramdisk
key_buffer_size = 1024M
innodb_buffer_pool_size = 512M
query_cache_size=16M

How might I verify that the spaces around the = signs are not a problem? Note that this question is not specific to the /etc/my.cnf file, but rather to *NIX config files in general. My first inclination is to RTFM but in fact man mysql makes no mention of the issue and if I need to go hunting online for each case, I'll never get anywhere. Is there any convention or easy way to check? As can be seen, multiple people have edited this file (different conventions for = signs) and I can neither force them all to use no spaces, nor can I go crazy checking everything that may have been configured and may or may not be correct.

EDIT: My intention is to ensure that currently-configured files are done properly. When configuring files myself, I go with the convention of whatever the package maintainer put in there.
I'll answer this in a more general way, looking a bit at the whole "Unix learning experience".

In your example you use two tools and see that the languages look similar; it is just unclear when to use what exactly. Of course you expect there is a clear structure, so you ask us to explain it. The case of spaces around = is only an example; there are lots of similar-but-not-quite cases. There has to be a logic in it, right?!

The rules for how to write code for some tool, shell, database etc. depend only on what that particular tool requires. That means the tools are completely independent, technically. The logical relation that I think you expect simply does not exist. The obvious similarity between the languages is not part of the program implementation. The similarity exists because developers agreed on conventions when they wrote them down for a particular program. But humans can agree only partially. The relation you are seeing is a cultural thing: it's neither part of the implementation, nor part of the definition of the language.

So, now that we have handled the theory, what to do in practice? A big step is to accept that the consistency you expected does not exist, which is much easier once you understand the reasons; I hope the theory part helps with this. If you have two tools that do not use the same configuration language (e.g. both bash scripting), knowing the details of the syntax of one does not help much with understanding the other. So, indeed, you will have to look up details independently. Make sure you know where to find the reference documentation for each.

On the positive side, there is some consistency where you did not expect it: in the context of a single tool (or different tools using the same language), you can be fairly sure the syntax is consistent. In your MySQL example, that means you can assume that all lines follow the same rule. So the rule is "space before and after = is not significant".
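You can see this tolerance directly with any ini-style parser. As an illustration, Python's configparser (whose dialect is close to, though not identical to, MySQL's option-file format, so treat this as a sketch of the general behaviour rather than a statement about mysqld's own parser) strips whitespace around the delimiter:

```python
import configparser

# Two spellings of the same kind of setting, as in the my.cnf example;
# an ini-style parser treats them identically.
cfg = configparser.ConfigParser()
cfg.read_string("""
[mysqld]
key_buffer_size = 1024M
query_cache_size=16M
""")

print(cfg["mysqld"]["key_buffer_size"])   # whitespace around = is stripped
print(cfg["mysqld"]["query_cache_size"])
```

Both lookups return the bare value, with or without spaces in the source file. Shell assignment, by contrast, is word-splitting based, which is why `a = 1` fails there.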
There are wide differences in how hard it is to learn or use the configuration or scripting language of a tool. It can be as simple as "List foo values in cmd-foo.conf, one per line." It can be a full scripting language that is used elsewhere too; then you have a powerful tool for writing configuration, which in some cases is just nice and in others you will really need. Complex tools, or large families of related tools, sometimes use very complex special configuration-file syntax (some famous examples are sendmail and vim). Others use a general scripting language as a base and extend that language to support their special needs, sometimes in complex ways, as the language allows. That would be a very specific case of a domain-specific language (DSL).
When are spaces around the = sign forbidden?
I have a gzip archive with trailing data. If I unpack it using gzip -d, it tells me "decompression OK, trailing garbage ignored" (the same goes for gzip -t, which can be used as a method of detecting such data). Now I would like to see this garbage, but strangely enough I couldn't find any way to extract it. gzip -l --verbose tells me that the "compressed" size of the archive is the size of the whole file (i.e. including the trailing data), which is wrong and not helpful. file is also of no help, so what can I do?
Figured out now how to get the trailing data. I created a Perl script which writes the trailing data to a file; it's heavily based on https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=604617#10:

    #!/usr/bin/perl

    use strict;
    use warnings;
    use IO::Uncompress::Gunzip qw(:all);
    use IO::File;

    unshift(@ARGV, '-') unless -t STDIN;
    my $input_file_name = shift;
    my $output_file_name = shift;

    if (! defined $input_file_name) {
        die <<END;
    Usage: $0 ( GZIP_FILE | - ) [OUTPUT_FILE]
           ... | $0 [OUTPUT_FILE]

    Extracts the trailing data of a gzip archive. Outputs to stdout if no
    OUTPUT_FILE is given. - as input file causes it to read from stdin.

    Examples:
        $0 archive.tgz trailing.bin
        cat archive.tgz | $0
    END
    }

    my $in = new IO::File "<$input_file_name" or die "Couldn't open gzip file.\n";
    gunzip $in => "/dev/null", TrailingData => my $trailing;
    undef $in;

    if (! defined $output_file_name) {
        print $trailing;
    } else {
        open(my $fh, ">", $output_file_name) or die "Couldn't open output file.\n";
        print $fh $trailing;
        close $fh;
        print "Output file written.\n";
    }
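The same idea can be sketched without Perl: Python's zlib exposes whatever bytes are left over after the end of the compressed stream as unused_data. A minimal sketch (it reads everything into memory, unlike the streaming Perl version, and it stops after the first gzip member, so a file of several concatenated members would need a loop):

```python
import gzip
import zlib

# Build a test archive with trailing garbage; in practice you would
# read the bytes from your real file instead.
data = gzip.compress(b"payload") + b"TRAILING GARBAGE"

# wbits = MAX_WBITS | 16 tells zlib to expect a gzip header.
d = zlib.decompressobj(wbits=zlib.MAX_WBITS | 16)
payload = d.decompress(data)

print(payload)        # the decompressed archive contents
print(d.unused_data)  # everything after the gzip stream: the "garbage"
```

Writing d.unused_data to a file then gives you the same result as the Perl script's TrailingData output.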
How to get trailing data of gzip archive?
Can I use file and magic (http://linux.die.net/man/5/magic) to override the description file gives for some other known formats? For example, I would like to describe the following formats:

BED: http://genome.ucsc.edu/FAQ/FAQformat.html#format1
FASTA: http://en.wikipedia.org/wiki/FASTA_format

which are 'just' text files, or BAM (http://genome.ucsc.edu/FAQ/FAQformat.html#format5.1), which is 'just' a gzipped file starting with the magic number BAM\1. Do you know any examples? Is it possible to provide custom C code to test the file instead of using the magic format?
You can use the -m option to specify an alternate list of magic files, and if you include your own before the compiled magic file (/usr/share/file/magic.mgc on my system) in that list, those patterns will be tested before the "global" ones. You can create a function or an alias to always use that option transparently when issuing the file command.

The language used in magic files is quite powerful, so there is seldom a need to resort to custom C coding. The only time I felt inclined to do so was in the '90s, when matching HTML and XML files was difficult because there was no way (at that time) to have the flexible casing and offset matching necessary to parse <HTML and < Html and < html with one pattern. I implemented that in C as a modifier to the 'string' pattern, allowing case to be ignored and (optional) blanks to be compacted.

These changes in C required adaptation of the magic files as well. And unless the file source code has changed significantly since then, you will always need to modify (or provide extra) rules in magic files to match those C code changes. So you might as well start out trying to do it with changes to the magic files only, and fall back to changing the C code if that really doesn't work out.
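The BAM case is the awkward one, because the BAM\1 magic number sits inside the gzip container, where a plain magic pattern cannot see it. The check a rule would have to express can be sketched in Python (an illustration only; real BAM uses BGZF, a gzip variant that gzip readers can still decompress, and the sample bytes below are fabricated):

```python
import gzip
import io

def looks_like_bam(raw: bytes) -> bool:
    """True if raw is a gzip stream whose decompressed data starts
    with the BAM magic number b'BAM\\x01'."""
    try:
        with gzip.open(io.BytesIO(raw)) as fh:
            return fh.read(4) == b"BAM\x01"
    except OSError:
        return False

# A stand-in for a real BAM file: gzip-compressed data with the magic.
fake_bam = gzip.compress(b"BAM\x01" + b"\x00" * 16)
print(looks_like_bam(fake_bam))          # True
print(looks_like_bam(b"plain text"))     # False
```

For the plain-text formats (BED, FASTA) an ordinary string pattern at offset 0 in your own magic file is enough; it is only this decompress-then-match step that magic files cannot do on their own.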
file(1) and magic(5) : describing other formats
Is there any command line tool available on Linux to check a PDF file version?
Yes. The file command covers this:

    $ file x1.pdf
    x1.pdf: PDF document, version 1.7
    $

To get greater detail, you could try pdfinfo, part of poppler-utils:

    $ pdfinfo x1.pdf
    Title:          Full page photo
    Author:         steve
    Producer:       Microsoft: Print To PDF
    CreationDate:   Fri Apr  5 10:14:34 2019
    ModDate:        Fri Apr  5 10:14:34 2019
    Tagged:         no
    UserProperties: no
    Suspects:       no
    Form:           none
    JavaScript:     no
    Pages:          9
    Encrypted:      no
    Page size:      841.5 x 594.75 pts
    Page rot:       0
    File size:      5424973 bytes
    Optimized:      no
    PDF version:    1.7
    $
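If neither tool is installed, the version can usually be read straight from the file header, since a PDF begins with %PDF-M.N. A quick sketch (a simplification: a later document catalog entry may override the header version, so this is only a first approximation; the demo file below is fabricated):

```python
def pdf_header_version(path):
    """Return the M.N version string from the %PDF-M.N header, or None."""
    with open(path, "rb") as fh:
        header = fh.read(8)
    if header.startswith(b"%PDF-"):
        return header[5:8].decode("ascii")
    return None

# Demonstrate on a stand-in file; point it at a real PDF in practice.
with open("demo.pdf", "wb") as fh:
    fh.write(b"%PDF-1.7\n% fake body\n%%EOF\n")

print(pdf_header_version("demo.pdf"))   # 1.7
```

This is essentially the same test file's magic database performs when it prints "PDF document, version 1.7".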
Check PDF file version from command line