1,409,341,045,000 |
I can see that it's possible to insert a file into a Docker container via insert:
docker insert IMAGE URL PATH
Is there a way to fetch the contents of a file in a Docker container and save them somewhere on the host operating system? I'd like to extract the value of a configuration file and store it on my host operating system.
|
Depends on which version of the docker tool you're using:
The current version has an ordinary 'cp' command, according to cp doc v0.6.3:
Usage: docker cp CONTAINER:PATH HOSTPATH
Copy files/folders from the containers filesystem to the host
path. Paths are relative to the root of the filesystem.
For older versions, you may consider using 'export', as described in export doc v0.5.3:
Usage: docker export CONTAINER
Export the contents of a filesystem as a tar archive
(There may also be other options, based on capabilities of your container.)
| Pull a file from a Docker container? |
1,409,341,045,000 |
I'm not exactly sure whether this is the "right" kind of question to post here. I'm probably asking more for "opinions" than for categorical answers (ones that either work or don't, and that's it).
I was wondering what name separators would be the most linux friendly (or more specifically, Bash friendly) and human friendly at the same time.
Let's say I want to create a directory that contains something related to Mr. Foo Bar (Foo being the first name, Bar being the last name)
Having the name "Foo-Bar/" is very convenient. - is a "regular" character, it doesn't need to be escaped, it clearly shows that Foo and Bar are two separate things... Nice.
Now, "Foo.Bar" is a bit trickier. Someone may think that Foo.Bar is actually a file (at first glance, especially if you don't have coloring enabled in your terminal) where "Foo" is the filename and "Bar" the extension.
I could also use "Foo Bar", but then I need to escape the whitespace whenever I access the directory, and if I want to list the contents of the parent directory (where Foo Bar is located) and put that list in a bash array, the whitespace is going to cause a lot of trouble. Not nice.
Parentheses () also cause a lot of issues: they need to be escaped too, and they tend to cause trouble with commands such as scp... Not nice.
So... the question (at last) is: if you need to make the name of a file clear and meaningful at first glance, and you need to use separators, what do you use?
|
Short answer: "Foo_-_Bar"
Long answer:
To make things easy to spot, I tend to use a recognizable sequence of characters as the separator; the idea is to use something that immediately reads as "this is the separator", such as "_-_" or "___".
A practical example where I use this is my mp3 collection, where the filenames contain artist and song title, and sometimes the sequence number. If you use a magic sequence to separate them, it is easy both for the eye and for scripts. The mp3 example could look something like this:
01_Blue_Man_Group_-_Above.mp3
02_Blue_Man_Group_-_Time_to_Start.mp3
03_Blue_Man_Group_-_Sing_Along.mp3
Now this translates to your example if Foo and Bar are two logical things that should not mix; that would then be Foo_-_Bar.
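One reason "_-_" works well in scripts: shell parameter expansion can split such a name back into its parts without any external tools. A small sketch (names invented):

```shell
# split "Foo_-_Bar" back into its two parts using the _-_ separator
name="Foo_-_Bar"
first=${name%%_-_*}   # strip the longest suffix starting at the separator -> Foo
last=${name##*_-_}    # strip the longest prefix ending at the separator   -> Bar
echo "$first $last"
```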
| Good style/practices for separators in file (or directory) names [closed] |
1,409,341,045,000 |
I don't know why I can't use an environment array variable inside a script.
In my ~/.bashrc or ~/.profile
export HELLO="ee"
export HELLOO=(aaa bbbb ccc)
in a shell :
> echo $HELLO
ee
> echo $HELLOO
aaa
> echo ${HELLOO[@]}
aaa bbbb ccc
in a script :
#!/usr/bin/env bash
echo $HELLO
echo $HELLOO
echo ${HELLOO[@]}
---
# Return
ee
Why ?
|
A bash array cannot be an environment variable, as environment variables may only be key-value string pairs.
You may do as the shell does with its $PATH variable, which essentially is an array of paths; turn the array into a string, delimited with some particular character not otherwise present in the values of the array:
$ arr=( aa bb cc "some string" )
$ arr=$( printf '%s:' "${arr[@]}" )
$ printf '%s\n' "$arr"
aa:bb:cc:some string:
Or neater,
arr=( aa bb cc "some string" )
arr=$( IFS=:; printf '%s' "${arr[*]}" )
export arr
The expansion of ${arr[*]} will be the elements of the arr array separated by the first character of IFS, here set to :. Note that if doing it this way, the elements of the string will be separated (not delimited) by :, which means that you would not be able to distinguish an empty element at the end, if there was one.
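On the receiving end, the script can split the colon-separated string back into fields. A minimal POSIX sketch, with the exported value hard-coded for illustration:

```shell
# split the colon-separated string back into fields, as the child script would
arr="aa:bb:cc:some string"      # the value the parent exported
oldIFS=$IFS
IFS=:
set -- $arr                     # unquoted on purpose: field splitting happens on ':'
IFS=$oldIFS
printf 'Field: %s\n' "$@"       # four fields, including "some string"
```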
An alternative to passing values to a script using environment variables is (obviously?) to use the command line arguments:
arr=( aa bb cc )
./some_script "${arr[@]}"
The script would then access the passed arguments either one by one by using the positional parameters $1, $2, $3 etc, or by the use of $@:
printf 'First I got "%s"\n' "$1"
printf 'Then I got "%s"\n' "$2"
printf 'Lastly there was "%s"\n' "$3"
for opt in "$@"; do
printf 'I have "%s"\n' "$opt"
done
| Unable to use an Array as environment variable |
1,409,341,045,000 |
I want to change the passwords of 120 users, so I wrote sudo echo 'user:passwd' | chpasswd
but I got this message:
chpasswd: (user) pam_chauthtok() failed, error:
Authentication token manipulation error
chpaswd (line 1, user) password not changed
I also tried another way, using a text file, but got the same message.
I can't solve this problem.
|
The usual way to change the password is to use the passwd(1) command.
If you want to use chpasswd(8) or usermod(8), you should carefully RTFM.
Be sure that the given password is compatible with the system configuration. And sudo should apply to the chpasswd command, so you probably want
echo 'user:passwd' | sudo chpasswd
In your case, sudo echo 'user:passwd' | chpasswd,
the sudo is applied only to echo, which is incorrect.
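For 120 users, one way (the usernames and password below are invented) is to build the user:password input once and feed it to chpasswd in a single privileged call:

```shell
# build user:password lines for a batch of users (hypothetical names/password)
list=$(mktemp)
for u in user001 user002 user003; do
    printf '%s:%s\n' "$u" 'S3cret!'
done > "$list"
cat "$list"
# then apply them all in one privileged step:
#   sudo chpasswd < "$list"
```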
| I can't change user's passwd on Ubuntu |
1,409,341,045,000 |
I've enabled compression (mounted with compress=lzo) for my btrfs partition and used it for a while.
I'm curious about how much benefit the compression brought me and am interested in the saved space value (sum of all file sizes) - (actual used space).
Is there any straightforward way to get this value, or would I have to write a script that sums up all the file sizes (e.g. from du output) and compares the total to the btrfs filesystem df output?
|
In Debian/Ubuntu:
apt install btrfs-compsize
compsize /mnt/btrfs-partition
In Fedora:
dnf install compsize
compsize /mnt/btrfs-partition
output is like this:
Processed 123574 files, 1399139 regular extents (1399139 refs), 69614 inline.
Type Perc Disk Usage Uncompressed Referenced
TOTAL 73% 211G 289G 289G
none 100% 174G 174G 174G
lzo 32% 37G 115G 115G
It requires root (sudo) to work at all (otherwise SEARCH_V2: Operation not permitted).
It can be used on any directory (totalling the subtree), not just the whole filesystem from the mountpoint.
On a system with zstd, but some old files still compressed with lzo, there will be rows for each of them. (The Perc column is the disk_size / uncompressed_size for that row, not how much of the total is compressed that way. Smaller is better.)
| btrfs: how to calculate btrfs compression space savings? |
1,409,341,045,000 |
I'm reading a book on network programming with Go. One of the chapters deals with the /etc/services file. Something I noticed while exploring this file is that certain popular entries like HTTP and SSH, both of which use TCP at the transport layer, have a second entry for UDP. For example on Ubuntu 14.04:
ubuntu@vm1:~$ grep ssh /etc/services
ssh 22/tcp # SSH Remote Login Protocol
ssh 22/udp
ubuntu@vm1:~$ grep http /etc/services
http 80/tcp www # WorldWideWeb HTTP
http 80/udp # HyperText Transfer Protocol
Anyone know why these have two entries? I don't believe SSH or HTTP ever use UDP (confirmed by this question for SSH).
|
Basically, it's because that was the tradition from way back when port numbers started being assigned through until approximately 2011. See, for example, §7.1 “Past Principles” of RFC 6335:
TCP and UDP ports were simultaneously assigned when either was
requested
It's possible they will be un-allocated someday, of course, as ports 1023 and below are the "system ports", treated specially by most operating systems, and most of that range is currently assigned.
And, by the way, HTTP/3 runs over UDP. Though it can use any UDP port, not just 80/443. So really those are still unused.
As far as Debian is concerned, its /etc/services already had 22/udp in 1.0 (buzz 1996).
It was however removed in this commit in 2016, first released in version 5.4 of the netbase package.
As of writing, the latest stable version of Debian (buster) has 5.6. And the latest Ubuntu LTS (18.04, bionic) netbase package is based on Debian netbase 5.4 and you can see its changelog also mentions the removal of udp/22.
| Why do popular TCP-using services have UDP as well as TCP entries in /etc/services? |
1,409,341,045,000 |
I would like to have a list of all the timezones in my system's zoneinfo database (note: the system is Debian Stretch).
The current solution I have is : list all paths under /usr/share/zoneinfo/posix, which are either plain files or symlinks
cd /usr/share/zoneinfo/posix && find * -type f -or -type l | sort
I am not sure, however, that each and every known timezone is mapped to a path under this directory.
Question
Is there a command which gives the complete list of timezones in the system's current zoneinfo database?
|
On Debian 9, your command gave me all of the timezones listed here: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
Additionally, systemd provides timedatectl list-timezones, which outputs a list identical to your command.
As far as I know, the data in tzdata is provided directly from IANA:
This package contains data required for the implementation of
standard local time for many representative locations around the
globe. It is updated periodically to reflect changes made by
political bodies to time zone boundaries, UTC offsets, and
daylight-saving rules.
So just keep the tzdata package updated.
| How to list timezones known to the system? |
1,409,341,045,000 |
I wanted to create a file named test. Accidentally, I ran mkdir test instead of touch test.
Is it possible to convert test directory in a file named test?
What about converting a file named test into a directory with the same name?
|
This isn't possible for any Unix or Linux I've ever touched.
Under some older Unixes, directories are files, specially marked, but still files. You used to be able to read a directory with cat under SunOS, for example. A lot of modern filesystems might have directories as B+ trees, or some other on-disk data structure. So turning a file into a directory or vice versa would always require a delete and a re-creation with the same name.
| Transform a directory into file or a file into directory |
1,409,341,045,000 |
The BSD route command has a feature that will show what route will be selected for a given host. For example:
/Users/mhaase $ route get google.com
route to: iad23s07-in-f8.1e100.net
destination: iad23s07-in-f8.1e100.net
gateway: 10.36.13.1
interface: en0
flags: <UP,GATEWAY,HOST,DONE,WASCLONED,IFSCOPE,IFREF>
recvpipe sendpipe ssthresh rtt,msec rttvar hopcount mtu expire
0 0 0 0 0 0 1500 0
I occasionally find this useful if I am manually messing with routing tables to make sure that the routing rules are working as expected.
The GNU version of route does not have this same "get" subcommand. Is there some equivalent or alternative for GNU/Linux?
|
There is
ip route get 74.125.137.100
but it doesn't do hostname resolution (which I think is a good thing). The command is usually available from iproute or iproute2 packages.
| What is the GNU/Linux equivalent of BSD's "route get ..."? |
1,409,341,045,000 |
I have a .CSV file with the below format:
"column 1","column 2","column 3","column 4","column 5","column 6","column 7","column 8","column 9","column 10
"12310","42324564756","a simple string with a , comma","string with or, without commas","string 1","USD","12","70%","08/01/2013",""
"23455","12312255564","string, with, multiple, commas","string with or, without commas","string 2","USD","433","70%","07/15/2013",""
"23525","74535243123","string , with commas, and - hypens and: semicolans","string with or, without commas","string 1","CAND","744","70%","05/06/2013",""
"46476","15467534544","lengthy string, with commas, multiple: colans","string with or, without commas","string 2","CAND","388","70%","09/21/2013",""
The 5th column of the file contains various strings. I need to filter the file based on the 5th column's value. Let's say I need a new file, derived from the current one, that holds only the records with the value "string 1" in their fifth field.
For this I tried the command below,
awk -F"," ' { if toupper($5) == "STRING 1") PRINT }' file1.csv > file2.csv
but it was throwing me an error as following:
awk: { if toupper($5) == "STRING 1") PRINT }
awk: ^ syntax error
awk: { if toupper($5) == "STRING 1") PRINT }
awk: ^ syntax error
I then used the following, which gives me an odd output.
awk -F"," '$5="string 1" {print}' file1.csv > file2.csv
Output:
"column 1" "column 2" "column 3" "column 4" string 1 "column 6" "column 7" "column 8" "column 9" "column 10
"12310" "42324564756" "a simple string with a comma" string 1 without commas" "string 1" "USD" "12" "70%" "08/01/2013" ""
"23455" "12312255564" "string with string 1 commas" "string with or without commas" "string 2" "USD" "433" "70%" "07/15/2013" ""
"23525" "74535243123" "string with commas string 1 "string with or without commas" "string 1" "CAND" "744" "70%" "05/06/2013" ""
"46476" "15467534544" "lengthy string with commas string 1 "string with or without commas" "string 2" "CAND" "388" "70%" "09/21/2013" ""
P.S.: I used toupper to be on the safe side, as I am not sure whether the string will be in lower or upper case. I need to know what is wrong with my code, and whether the space in the string matters when searching for a pattern using awk.
|
awk -F '","' 'BEGIN {OFS=","} { if (toupper($5) == "STRING 1") print }' file1.csv > file2.csv
Output
"12310","42324564756","a simple string with a , comma","string with or, without commas","string 1","USD","12","70%","08/01/2013",""
"23525","74535243123","string , with commas, and - hypens and: semicolans","string with or, without commas","string 1","CAND","744","70%","05/06/2013",""
I think this is what you want.
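The underlying problem with -F"," is that the data fields themselves contain commas; the three-character separator '","' sidesteps that because it only occurs between quoted fields. A self-contained sketch with made-up rows:

```shell
# demonstrate the -F '","' trick on inline sample data (rows are invented)
printf '%s\n' \
    '"1","a, b","x","y","string 1","z"' \
    '"2","c","x","y","string 2","z"' |
awk -F '","' 'toupper($5) == "STRING 1"'
```

Only the first row is printed: splitting on '","' leaves the embedded comma in "a, b" intact, so $5 is really the fifth quoted field.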
| Filter a .CSV file based on the 5th column values of a file and print those records into a new file |
1,409,341,045,000 |
What's the difference between ATTRS and ATTR in udev rules? Aren't both attributes? Why the distinction?
|
The singular forms of udev match keys (KERNEL, SUBSYSTEM, DRIVER, ATTR) match only the device being added, whereas the plural forms (KERNELS, SUBSYSTEMS, DRIVERS, ATTRS) match the device being added or any of its parent devices.
See the Writing udev rules guide for details.
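For illustration (the vendor ID and symlink name are invented), a rule mixing the two kinds: the singular keys match the block device itself, while ATTRS climbs to a parent USB device:

```
# match a block device whose *parent* USB device has this vendor ID
SUBSYSTEM=="block", KERNEL=="sd?", ATTRS{idVendor}=="0781", SYMLINK+="usb_disk"
```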
| Difference between ATTRS and ATTR in udev rules? |
1,409,341,045,000 |
I have an embedded device. It is ARM based, with Linux 2.6.31 and has 256 MB RAM.
I wanted to get some ideas/tips on the most effective way to test the device's RAM for data corruption. Is there a test/software that someone can suggest?
Note:
I have memtester now. I got it after suggestion from Ulrich Dangel (below).
I also have mtest set up from the uboot now.
Any other tests / approaches I could use?
|
Das U-Boot is perhaps the most widely used boot loader on ARM boards, and it includes some memory test features.
Interestingly, its README suggests an alternative approach that might be more portable and/or more effective:
The best known test case to stress a system like that is to boot Linux with root file system mounted over NFS, and then build some larger
software package natively (say, compile a Linux kernel on the system) -
this will cause enough context switches, network traffic (and thus DMA transfers from the network controller), varying RAM use, etc. to trigger any weak spots in this area.
While you're building the linux kernel, you might be interested in the CONFIG_MEMTEST=y option, which causes the built-in memory test to be built. This used to be for x86 architecture only, but I believe recent versions support it on other architectures as well, perhaps even ARM.
The memtester tool is already built and available in some linux distributions, for various architectures, including ARM.
The kernel-memtest project might interest you as well.
Bear in mind that no tool can test the memory that it's running from (so a program in a running OS will have significant blind spots) and basic read/write tests won't reveal every type of defect or other error. Set your expectations accordingly, and if you have reason to suspect bad memory, consider trying several different test tools.
| How can I test the RAM for data corruption on an ARM-based system? |
1,300,264,276,000 |
I have been working with embedded OSes like uCOS and ThreadX. While I have coded apps on Linux, I'm now planning to start learning Linux kernel development. I have a few questions regarding the environment.
Which is the best distro with easy-to-use tools for kernel development? (So far I have used RHEL and Fedora. While I am comfortable with these, it also looks like Ubuntu has built-in scripts for easy kernel compilation, like make_kpkg, etc.)
Can you describe the best setup for kernel debugging? While debugging other embedded OSes, I have used serial port to dump progress, JTAG, etc. Which kind of setup does the Linux kernel devs use? (Will my testbed PC with serial port is enough for my needs? If yes, how to configure the kernel to dump to serial port?) I'm planning to redirect kernel messages to serial console which will be read in my laptop.
What tool is best for debugging and tracing kernel code? As mentioned earlier, is serial console the only way? Or any IDE/JTAG kind of interface exists for PC?
|
My personal flavor for Linux Kernel development is Debian. Now for your points:
As you probably guessed, Ubuntu doesn't bring anything new to the kernel to ease development, AFAIK, apart from what's already available in Debian. For example, make_kpkg is a Debian feature, not an Ubuntu one. Here are some links to get you started on common Linux kernel development tasks in Debian:
Chapter 4 - Common kernel-related tasks of Debian Linux Kernel Handbook
Chapter 10 - Debian and the kernel of The Debian GNU/Linux FAQ
The easiest way to do kernel debugging is using QEMU and GDB. Some links to get you started:
http://files.meetup.com/1590495/debugging-with-qemu.pdf
http://www.cs.rochester.edu/~sandhya/csc256/assignments/qemu_linux.html
Though you should be aware that this method is not viable for certain scenarios, such as debugging specific hardware issues, for which you would be better off using physical serial debugging and real hardware. For this you can use KGDB (it works over Ethernet too). KDB is also a good choice. Oh, and by the way, both KGDB and KDB have been merged into the Linux kernel. More on those two here.
Another cool method, which works marvelously for non-hardware-related issues, is using the User-mode Linux kernel. Running the kernel in user mode like any other process allows you to debug it just like any other program (examples). More on User-mode Linux here. UML has been part of the Linux kernel since 2.6.0, so you can build any official kernel version above that in UML mode by following these steps.
See item 2. Unfortunately there is no ultimate best method here, since each tool/method has its pro's and con's.
| Kernel Hacking Environment |
1,300,264,276,000 |
What is the purpose and benefit of using the --system option when adding a
user, or even a group?
I'd like to know why I'm seeing this added to many Docker containers and recommended as a best practice.
For example sake I'm adding a non-root user to an Alpine Docker container
for use when developing and again for runtime.
The current versions I'm using are:
adduser version is 3.118, and the Alpine adduser man
Alpine version is 3.12
Docker v19.03.13 on Windows 10 (20H2 update)
The man page reads "Create a system user", O.K., but what do you get as a system user? Or by being in a system group when using addgroup -S?
I do not have a system-administration background, so I'm not sure what that means, and I would like clarity as to when I should use this.
Some Other Reading
Searching Google has provided some insight but no way to verify what I've read.
One thing I found is that it does not ask you to set a password for the user, but then I could just use --disabled-password for that.
I then found this post here; I got that it's for organizational purposes, but that does not help me much either. I'm only a little bit clearer, yet not confident enough to explain when to use them.
What's the difference between a normal user and a system user?
What's the difference between a normal user and a system user?
|
System users are like normal users, but they are created for an organizational purpose. The differences are:
They don't have an expiry date (no password aging is set)
Their UIDs fall in the system range (below 1000) defined in /etc/login.defs (this can be changed):
There are also standard system users, which come with the OS or with a package install. Most of them conventionally have the attributes above, and additionally:
The majority of them have /sbin/nologin or /bin/false as a shell
They have "*" or "!!" in /etc/shadow, meaning that no one can simply log in as them
To check the list of standard system users: /usr/share/doc/setup-/uidgid
As an example, suppose we add a myapp user as a system user. If we then set up an Identity and Access Management policy in our environment that automatically expires regular accounts (keyed on their UID range), it will leave the system users alone; that matters because if the myapp account expired, the application would stop running.
| What does the `--system` flag for **adduser** do? |
1,300,264,276,000 |
CentOS 6.4
I'm trying to better understand how filesystem ACL rules are processed and in what order ACL rules apply.
For example, let's say users bob and joe belong to a group called sales. Let's also say that I have a sales document with the following details:
[root@Maui ~]# getfacl /home/foo/docs/foo.txt
getfacl: Removing leading '/' from absolute path names
# file: home/foo/docs/foo.txt
# owner: jane
# group: executives
user:: r--
user:bob:rw-
user:joe:rwx
group:sales:rwx
group::r--
mask::rwx
other::---
My question is, how are permissions processed in an example like this and what access privileges take precedence?
Is there just a top-down search and whatever rule matches first is the one that applies?
Or does Linux enforce access based on what is the most specific rule for the user in question? Or perhaps the most restrictive and applicable rule takes precedence?
|
This is kind of a broad topic and a little too much to cover here. I'll refer you to the POSIX Access Control Lists on Linux whitepaper put together by Andreas Grünbacher of the SuSE Labs. It does a pretty good job of covering the subject and breaking it down so you understand how ACLs work.
Your example
Now let's take a look at your example and break it down.
group (sales)
members of sales group (bob, joe)
Now let's break down the permissions on file /home/foo/docs/foo.txt. ACLs also encapsulate the same permissions that most people should be familiar with on Unix, mainly the User, Group, and Other bits. So let's pull those out first.
user:: r--
group::r--
other::---
These would typically look like this in an ls -l:
$ ls -l /home/foo/docs/foo.txt
-r--r----- 1 jane executives 24041 Sep 17 15:09 /home/foo/docs/foo.txt
You can see who owns the file and what the group is with these ACL lines:
# owner: jane
# group: executives
So now we get into the nitty gritty of ACLs:
user:bob:rw-
user:joe:rwx
group:sales:rwx
This is showing that user bob has rw, while user joe has rwx. There is also a group which also has rwx similar to joe. These permissions are as if the user column in our ls -l output had 3 owners (jane, bob, and joe) as well as 2 groups (executives & sales). There is no distinction other than they are ACLs.
Lastly the mask line:
mask::rwx
In this case we're not masking anything, it's wide open. So if users bob and joe have these lines:
user:bob:rw-
user:joe:rwx
Then those are their effective permissions. If the mask were like this:
mask::r-x
Then their effective permissions would be like this:
user:bob:rw- # effective:r--
user:joe:rwx # effective:r-x
This is a powerful mechanism for curtailing permissions that are granted in a wholesale way.
NOTE: The file owner and others permissions are not affected by the effective rights mask; all other entries are! So with respect to the mask, the ACL permissions are second class citizens when compared to the traditional Unix permissions.
References
getfacl(1) - Linux man page
POSIX Access Control Lists on Linux whitepaper
| How are ACL permissions processed and in what order do they apply to a given user action? |
1,300,264,276,000 |
I have used ab many times for measuring web performance, hdparm for measuring hard disk performance, and netperf for measuring network performance.
But I didn't find any tool to measure CPU performance.
Do you know of a tool for measuring CPU performance? I am more specifically looking to measure GFLOPS.
|
You should take a look at the Wikipedia page on benchmarking; it lists quite a few benchmark tools, including CPU benchmarks that will work on Linux. LINPACK is free but a pain to compile, and you can certainly look at NBench and some of the others in the list.
| Is there an open source tool to measure cpu performance? |
1,300,264,276,000 |
Is it risky to rename a folder containing 180 GB with the mv command?
We have a folder /data that contains 180 GB.
We want to rename the /data folder to /BD_FILES with the mv command.
Is it safe to do that?
|
Changing the name on a folder is safe, if it stays within the same file system.
If it is a mount point (/data kinda looks like it could be a mount point to me, check this with mount), then you need to do something other than just a simple mv since mv /data /BD_FILES would move the data to the root partition (which may not be what you want to happen).
You should unmount the filesystem, rename the now empty directory, update /etc/fstab with the new location for this filesystem, and then remount the filesystem at the renamed location.
In other words,
umount /data
mv /data /BD_FILES (assuming /BD_FILES doesn't already exist, in that case, move it out of the way first)
update /etc/fstab, changing the mount point from /data to /BD_FILES
mount /BD_FILES
This does not involve copying any files around, it just changes the name of the directory that acts as the mount point for the filesystem.
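For the simple case where /data is not a mount point, a throwaway sketch (paths invented) shows that a same-filesystem mv only rewrites the directory entry and never touches file contents:

```shell
# rename a directory within one filesystem: only the directory entry changes
top=$(mktemp -d)
mkdir "$top/data"
echo hello > "$top/data/file.txt"
mv "$top/data" "$top/BD_FILES"   # instant, regardless of how much data is inside
ls "$top/BD_FILES"
```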
If the renaming of the directory involves moving it to a new file system (which would be the case if /data is on one disk while /BD_FILES is on another disk, a common thing to do if you're moving things to a bigger partition, for example), I'd recommend copying the data while leaving the original intact until you can check that the copy is ok. You may do this with
rsync -a /data/ /BD_FILES/
for example, but see the rsync manual for what this does and does not do (it does not preserve hard links, for example).
Once the folder is renamed, you also need to make sure that existing procedures (programs and users using the folder, backups etc.) are aware of the name change.
| renaming a huge folder: is it risky? |
1,300,264,276,000 |
I have a tar that was generated on a linux machine. I need to upload part of that tar to another linux machine. The full tar is huge and will take hours to upload. I am now on a Mac OSX machine and this is my problem:
I extract the tar to a folder and locate what I need to upload to the new server
I create a smaller tar containing just what I want to upload.
I upload and extract that to the new linux machine
When I look at the server, it is full of ._ files. For every file uploaded there is a ._ file, like text1.txt, ._text1.txt, text2.txt, ._text2.txt...
OS X is including these files in the tar.
I have tried to do this
tar --exclude='._*' -cvf newTar .
but it made no difference.
I do not have ssh access to the new server now.
What can I do to solve that? How do I generate a clean tar.
|
To my understanding, tar --exclude='._*' -cvf newTar . should work: Finder creates the ._* files but newTar shouldn't contain them.
But you can completely bypass those files by invoking tar in passthrough mode. For example, to copy only the files from oldTar that are under some/path, use
tar -cf newTar --include='some/path/*' @oldTar
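A quick way to convince yourself the exclude pattern works, using scratch files (names invented):

```shell
# sanity-check that --exclude='._*' keeps AppleDouble files out of the archive
tmp=$(mktemp -d)
cd "$tmp"
mkdir payload
touch payload/text1.txt payload/._text1.txt
tar --exclude='._*' -cf clean.tar payload
tar -tf clean.tar
```

Listing the archive should show payload/text1.txt but no ._text1.txt, since tar's exclusion patterns match individual name components by default.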
| A lot of "._" files inside a tar [duplicate] |
1,300,264,276,000 |
When I try to get the week number for Dec 31, it returns 1. When I get the week number for Dec 30, I get 52, which is what I would expect. The day, Monday, is correct. This is on an RPi running Ubuntu.
$ date -d "2018-12-30T1:58:55" +"%V%a"
52Sun
$ date -d "2018-12-31T1:58:55" +"%V%a"
01Mon
Same issue without time string
$ date -d "2018-12-31" +"%V%a"
01Mon
|
This is giving you the ISO week which begins on a Monday.
The ISO week date system is effectively a leap week calendar system that is part of the ISO 8601 date and time standard issued by the International Organization for Standardization (ISO) since 1988 (last revised in 2004) and, before that, it was defined in ISO (R) 2015 since 1971. It is used (mainly) in government and business for fiscal years, as well as in timekeeping. This was previously known as "Industrial date coding". The system specifies a week year atop the Gregorian calendar by defining a notation for ordinal weeks of the year.
An ISO week-numbering year (also called ISO year informally) has 52 or 53 full weeks. That is 364 or 371 days instead of the usual 365 or 366 days. The extra week is sometimes referred to as a leap week, although ISO 8601 does not use this term.
Weeks start with Monday. Each week's year is the Gregorian year in which the Thursday falls. The first week of the year, hence, always contains 4 January. ISO week year numbering therefore slightly deviates from the Gregorian for some days close to 1 January.
If you want to show 12/31 as week 52, you should use %U, which does not use the ISO standard:
$ date -d "2018-12-31T1:58:55" +"%V%a"
01Mon
$ date -d "2018-12-31T1:58:55" +"%U%a"
52Mon
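A related gotcha: %V pairs with the ISO week-based year %G, not the calendar year %Y, so mixing %Y with %V mislabels days near the year boundary:

```shell
# %G is the ISO week-based year that goes with %V
date -d "2018-12-31" +"%G-W%V"      # ISO week date: week 01 of 2019
date -d "2018-12-31" +"%Y week %U"  # Sunday-based weeks: week 52 of 2018
```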
| Date Command Gives Wrong Week Number for Dec 31 |
1,300,264,276,000 |
How can I use a chmod command (preferably a single one) to allow any user to create a file in a directory, while only the owner of a file (the user who created it) can delete their own file, but no one else's, in that directory?
I was thinking to use:
chmod 755 directory
The user can then create a file and delete it, but won't that allow users to delete other people's files?
I only want the person who created the file to be able to delete their own file. So, anyone can make a file but only the person who created a file can delete that file (in the directory).
|
The sticky bit can do more or less what you want. From man 1 chmod:
The restricted deletion flag or sticky bit is a single bit, whose interpretation depends on the file type. For directories, it prevents unprivileged users from removing or renaming a file in the directory unless they own the file or the directory; this is called the restricted deletion flag for the directory, and is commonly found on world-writable directories like /tmp.
That is, the sticky bit's presence on a directory only allows contained files to be renamed or deleted if the user is either the file's owner or the containing directory's owner (or the user is root).
You can apply the sticky bit (which is represented by octal 1000, or t) like so:
# instead of your chmod 755
chmod 1777 directory
# or, to add the bit to an existing directory
chmod o+t directory
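A quick sanity check that the bit took effect, on a throwaway directory:

```shell
# create a world-writable directory with the sticky bit and inspect the mode
d=$(mktemp -d)
chmod 1777 "$d"
ls -ld "$d"        # the mode string ends in 't': drwxrwxrwt
stat -c '%a' "$d"  # octal mode including the sticky bit
```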
| Allow all users to create files in a directory, but only the owner can delete |
1,300,264,276,000 |
I know this question isn't very new, but it seems I haven't been able to fix my problem on my own.
ldd generates the following output:
u123@PC-Ubuntu:~$ ldd /home/u123/Programme/TestPr/Debug/TestPr
linux-vdso.so.1 => (0x00007ffcb6d99000)
libcsfml-window.so.2.2 => not found
libcsfml-graphics.so.2.2 => not found
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fcebb2ed000)
/lib64/ld-linux-x86-64.so.2 (0x0000560c48984000)
Which is the correct way to tell ld the correct path?
|
If your libraries are not in a standard path, then you either need to move them to a standard path or add the non-standard path to LD_LIBRARY_PATH:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<Your_non-Standard_path>
Once you have done either of the above, you need to update the dynamic linker's run-time bindings by executing the command below:
sudo ldconfig
UPDATE:
You can make the changes permanent by either writing the above export line into one of your startup files (e.g. ~/.bashrc) OR if the underlying library is not conflicting with any other library then put into one of standard library path (e.g. /lib,/usr/lib)
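For a session-only change, the export looks like this (the library directory below is hypothetical; the `${VAR:+...}` form avoids a stray colon when the variable starts out empty):

```shell
# Prepend a non-standard library directory for this shell session only:
export LD_LIBRARY_PATH="/opt/csfml/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```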
| ldd does not find path, How to add |
1,300,264,276,000 |
I found information that nvram is used for BIOS flashing/backup and that it contains some BIOS-related data. Would cat /dev/random > /dev/nvram permanently brick the computer? I'm quite tempted to type this command, but somehow I feel it's not going to end well for my machine, so I guess I'd like to know how dangerous playing with this device is.
|
I'm curious as to exactly why you'd want to run such a command if you think it might damage your computer...
/dev/nvram provides access to the non-volatile memory in the real-time clock on PCs and Ataris. On PCs this is usually known as CMOS memory and stores the BIOS configuration options; you can see the information stored there by looking at /proc/driver/nvram:
Checksum status: valid
# floppies : 4
Floppy 0 type : none
Floppy 1 type : none
HD 0 type : ff
HD 1 type : ff
HD type 48 data: 65471/255/255 C/H/S, precomp 65535, lz 65279
HD type 49 data: 3198/255/0 C/H/S, precomp 0, lz 0
DOS base memory: 630 kB
Extended memory: 65535 kB (configured), 65535 kB (tested)
Gfx adapter : monochrome
FPU : installed
All this is handled by the nvram kernel module, which takes care of checksums etc. Most of the information here is only present for historical reasons, and reflects the limitations of old operating systems: the computer I ran this on doesn't have four floppy drives, the hard drive information is incorrect, as is the memory information and display adapter information.
I haven't tried writing random values to the device, but I suspect it wouldn't brick your system: at worst, you should be able to recover by clearing the CMOS (there's usually a button or jumper to do that on your motherboard). But I wouldn't try it!
The only useful features in the CMOS memory nowadays are RTC-related. In particular, nvram-wakeup can program the CMOS alarm to switch your computer on at a specific time. (So that would be one reason to write to /dev/nvram.)
| Is /dev/nvram dangerous to write to? |
1,300,264,276,000 |
In some Bourne-like shells, the read builtin can not read the whole line from file in /proc (the command below should be run in zsh, replace $=shell with $shell with other shells):
$ for shell in bash dash ksh mksh yash zsh schily-sh heirloom-sh "busybox sh"; do
printf '[%s]\n' "$shell"
$=shell -c 'IFS= read x </proc/sys/fs/file-max; echo "$x"'
done
[bash]
602160
[dash]
6
[ksh]
602160
[mksh]
6
[yash]
6
[zsh]
6
[schily-sh]
602160
[heirloom-sh]
602160
[busybox sh]
6
The read standard requires the standard input to be a text file; does that requirement cause the varied behaviors?
Having read the POSIX definition of a text file, I did some verification:
$ od -t a </proc/sys/fs/file-max
0000000 6 0 2 1 6 0 nl
0000007
$ find /proc/sys/fs -type f -name 'file-max'
/proc/sys/fs/file-max
There's no NUL character in the content of /proc/sys/fs/file-max, and find also reported it as a regular file (is this a bug in find?).
I guess the shell did something under the hood, like file:
$ file /proc/sys/fs/file-max
/proc/sys/fs/file-max: empty
|
The problem is that those /proc files on Linux appear as text files as far as stat()/fstat() is concerned, but do not behave as such.
Because it's dynamic data, you can only do one read() system call on them (for some of them at least). Doing more than one could get you two chunks of two different contents, so instead it seems a second read() on them just returns nothing (meaning end-of-file) (unless you lseek() back to the beginning (and to the beginning only)).
The read utility needs to read the content of files one byte at a time to be sure not to read past the newline character. That's what dash does:
$ strace -fe read dash -c 'read a < /proc/sys/fs/file-max'
read(0, "1", 1) = 1
read(0, "", 1) = 0
Some shells like bash have an optimisation to avoid having to do so many read() system calls. They first check whether the file is seekable, and if so, read in chunks as then they know they can put the cursor back just after the newline if they've read past it:
$ strace -e lseek,read bash -c 'read a' < /proc/sys/fs/file-max
lseek(0, 0, SEEK_CUR) = 0
read(0, "1628689\n", 128) = 8
With bash, you'd still have problems for proc files that are more than 128 bytes large and can only be read in one read system call.
bash also seems to disable that optimization when the -d option is used.
ksh93 takes the optimisation even further, so far as to become bogus. ksh93's read does seek back, but remembers the extra data it has read for the next read, so the next read (or any of its other builtins that read data, like cat or head) doesn't even try to read the data (even if that data has been modified by other commands in between):
$ seq 10 > a; ksh -c 'read a; echo test > a; read b; echo "$a $b"' < a
1 2
$ seq 10 > a; sh -c 'read a; echo test > a; read b; echo "$a $b"' < a
1 st
| Why some shells `read` builtin fail to read the whole line from file in `/proc`? |
1,300,264,276,000 |
I roughly know about the files located under /dev.
I know there are two types (character/block), accessing these files communicates with a driver in the kernel.
I want to know what happens if I delete one -- specifically for both types of file. If I delete a block device file, say /dev/sda, what effect -- if any --
does this have? Have I just unmounted the disk?
Similarly, what if I delete /dev/mouse/mouse0 -- what happens? Does the mouse stop working? Does it automatically replace itself?
Can I even delete these files? If I had a VM set up, I'd try it.
|
Those are simply (special) files. They only serve as "pointers" to the actual device. (i.e. the driver module inside the kernel.)
If some command/service already opened that file, it already has a handle to the device and will continue working.
If some command/service tries to open a new connection, it will try to access that file and fail because of "file not found".
Usually those files are populated by udev, which automatically creates them at system startup and on special events like plugging in a USB device, but you could also manually create those using mknod.
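For instance, /dev/null could be recreated by hand if it were ever deleted; this requires root (or CAP_MKNOD), and 1/3 are the conventional major/minor numbers for the null device on Linux. The node name below is an example:

```shell
# Recreate a "null" device node by hand; c = character device,
# major 1 / minor 3 identify the null driver on Linux:
mknod mynull c 1 3
chmod 666 mynull
echo discarded > mynull      # behaves exactly like /dev/null
stat -c '%F' mynull          # character special file
```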
| What happens if you delete a device file? |
1,300,264,276,000 |
In Unix based operating systems are UTF-8 filenames permissible? If so do I need to do anything special to write the file to disk?
Let me explain what I'm hoping to do. I'm writing an application that will transfer a file via FTP to a remote system, but the filename is dynamically set via some set of metadata which could potentially be in UTF-8. I'm wondering if there's something I need to do to write the file to disk in Unix/Linux.
Also as a follow up, does anyone know what would happen if I did upload a UTF-8 filename to a system doesn't support UTF-8?
|
On Unix/Linux, a filename is a sequence of any bytes except for a slash or a NUL. A slash separates path components, and a NUL terminates a path name.
So, you can use whatever encoding you want for filenames. Some applications may have trouble with some encodings if they are naïve about what characters may be in filenames - for example, poorly-written shell scripts often do not handle filenames with spaces.
Modern Unix/Linux environments handle UTF-8 encoded filenames just fine.
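A quick demonstration — the kernel stores the name as raw bytes, and a UTF-8 locale displays it as the intended characters (the filename below is an example):

```shell
# The filename contains multi-byte UTF-8 characters; to the filesystem
# it is just a byte string containing no '/' and no NUL:
touch 'café_résumé.txt'
ls café_résumé.txt
ls -b café_résumé.txt   # non-UTF-8 locales show the raw bytes as escapes
```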
| UTF 8 filenames? |
1,300,264,276,000 |
Why does Linux run well on so many different types of machines - desktops, laptops, servers, embedded devices, mobile phones, etc? Is it mainly because the system is open, so any part of it can be modified to work in different environments? Or are there other properties of the Linux kernel and/or system that make it easier for this OS to work on such a wide range of platforms?
|
While openness is certainly part of it, I think the key factor is Linus Torvalds's continued insistence that all of the work, from big to small, has a place in the mainline Linux kernel, as long as it's well done. If he'd decided at some point to draw a line and say "okay, for that fancy super-computer hardware, we need a fork", then completely separate high-end and small-system variants might have developed. As it is, instead people have done the harder work of making it all play together relatively well.
And, kludges which enable one side of things at the detriment of the other aren't, generally, allowed in — again, forcing people to solve problems in a harder but more correct way, which turns out to usually be easier to go forward from once whatever required the kludge becomes a historical footnote.
From an interview several years ago:
Q: Linux is a versatile system. It
supplies PC, huge servers, mobiles and
ten or so of other devices. From your
privileged position, which sector will
be the one where Linux will express
the highest potential?
A: I think the real power of Linux is
exactly that it is not about one
niche. Everybody gets to play along,
and different people and different
companies have totally different
motivations and beliefs in what is
important for them. So I’m not even
interested in any one particular
sector.
| Why does Linux scale so well to different hardware platforms? |
1,300,264,276,000 |
Background:
Since I develop python programs that must run on different python versions, I have installed different versions of python on my computer.
I am using FC 13 so it came with python 2.6 pre-installed in /usr/bin/python2.6 and /usr/lib/python2.6.
I installed python 2.5 from source, and to keep things neat, I used the --prefix=/usr option, which installed python in /usr/bin/python2.5 and /usr/lib/python2.5.
Now, when I run python my prompt shows I am using version 2.5. However, I am having some issues with the install.
Package management:
Using easy_install, packages are always installed in /usr/lib/python2.6/site-packages/. I downloaded setuptools .egg for python 2.5 and tried to install it, but it gives me an error:
/usr/lib/python2.5/site-packages does NOT support .pth files
It seems that python2.5 is not in my PYTHONPATH. I thought the default install would add itself to the PYTHONPATH, but when I write echo $PYTHONPATH at the prompt, I just get an empty line.
|
The recommended way of having multiple Python versions installed is to install each from source - they will happily coexist together. You can then use virtualenv with the appropriate interpreter to install the required dependencies (using pip or easy_install). The trick to easier installation of multiple interpreters from source is to use:
sudo make altinstall
instead of the more usual "sudo make install". This will add the version number to the executable (so you'd have python-2.5, python-2.6, python-3.2 etc) thus preventing any conflicts with the system version of Python.
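The environment-per-interpreter pattern looks like this with a modern interpreter, for illustration — python3's built-in venv stands in here for virtualenv, and a virtual environment always runs with whichever interpreter created it:

```shell
# Create an isolated environment bound to one specific interpreter;
# everything run through env/bin/python uses that interpreter:
python3 -m venv --without-pip env
./env/bin/python -V
```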
| Using different versions of Python |
1,300,264,276,000 |
How can I recursively remove the EXIF info from several thousand JPG files?
|
The other ExifTool suggestions are great if you want to remove or change specific sections. But if you want to just remove all of the metadata completely, use this (from the man page):
exiftool -all= dst.jpg
Delete all meta information from an image.
You could also use jhead, with the -de flag:
-de Delete the Exif header entirely. Leaves other metadata
sections intact.
Note that in both cases, EXIF is only one type of metadata. Other metadata sections may be present, and depending on what you want to do, both of these programs have different options for preserving some or removing it all. For example, jhead -purejpg strips all information not needed for rendering the image.
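Since the question asks about several thousand files, note that exiftool can recurse on its own: -r descends into subdirectories, and -overwrite_original suppresses the *_original backup copies it would otherwise leave behind (the directory path below is an example):

```shell
# Strip all metadata from every image under a directory tree:
exiftool -all= -r -overwrite_original /path/to/photos
```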
| Batch delete exif info |
1,300,264,276,000 |
I currently have a shell script running on a linux server which is using wget in oder to download a remote web page. This in turn is executed by a cron job which is scheduled to run at certain times.
Can someone please confirm that adding in the -q option will not only stop all output being returned to the console, but will also stop all attempts by wget to write to the logs or to try and create a log file?
|
With the -q option, wget itself should not output anything to either the console or the logfile specified by the -o option, except for the case described by Michał. The logfile however will be created (if -o was supplied).
This however does not guarantee that no system daemons will notice the fact that wget was run - the network activity can be independently monitored by other tools.
| Does -q definitely turn off wget output logging? |
1,300,264,276,000 |
In order to change both a file's owner and group we can do this:
chown trump file
chgrp trump file
but can I do both commands in one approach or one command?
|
per chown man page chown user:group file:
chown trump:trump file
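A few related forms, for reference (the user and group names are examples):

```shell
chown trump file           # change owner only
chown :trump file          # change group only (note the leading colon)
chown -R trump:trump dir/  # owner and group, recursively
```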
| How to perform chown and chgrp in one command |
1,300,264,276,000 |
I have a pen drive and one partition:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
└─sda1 8:1 0 931.5G 0 part /
sdb 8:16 1 7.5G 0 disk
└─sdb1 8:17 1 7.5G 0 part
and I have formatted with command:
# mkfs.fat -n A /dev/sdb
and it works fine.
But after then, I skimmed though the man page for mkfs:
mkfs is used to build a Linux filesystem on a device, usually a hard
disk partition. The device argument is either the device name (e.g.
/dev/hda1, /dev/sdb2), or a regular file that shall contain the
filesystem. The size argument is the number of blocks to be used for
the filesystem.
It says mkfs usually works on a partition. So my question is: why does my operation on the whole disk work without any error?
|
Creating a filesystem on a whole disk rather than a partition is possible, but unusual. The documentation only explicitly mentions the partition because that's the most usual case (it does say usually). You can create a filesystem on anything that acts sufficiently like a fixed-size file, i.e. something where if you write data at a certain location and read back from the same location then you get back the same data. This includes whole disks, disk partitions, and other kinds of block devices, as well as regular files (disk images).
After doing mkfs.fat -n A /dev/sdb, you no longer have a partition on that disk. Beware that the kernel still thinks that the disk has a partition, because it keeps the partition table cached in memory. But you shouldn't try to use /dev/sdb1 anymore, since it no longer exists; writing to it would corrupt the filesystem you created on /dev/sdb since /dev/sdb1 is a part of /dev/sdb (everything except a few hundred bytes at the beginning). Run the command partprobe as root to tell the kernel to re-read the partition table.
While creating a filesystem on a whole disk is possible, I don't recommend it. Some operating systems may have problems with it (I think Windows would cope but some devices such as cameras might not), and you lose the possibility of creating other partitions. See also The merits of a partitionless filesystem
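The "regular file" case from the documentation is easy to try safely with a disk image instead of a real device (filenames are examples; this assumes mkfs.fat is installed):

```shell
# Build a FAT filesystem inside an ordinary 16 MiB file:
dd if=/dev/zero of=disk.img bs=1M count=16
mkfs.fat -n A disk.img
file disk.img      # identifies it as a FAT filesystem image
```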
| Is it ok to mkfs without partition number? |
1,300,264,276,000 |
I often use the command
cat /dev/urandom | strings --bytes 1 | tr -d '\n\t ' | head --bytes 32
to generate pseudo-random passwords. This doesn't work with /dev/random.
Specifically
cat /dev/urandom | strings --bytes 1 | tr -d '\n\t ' produces output
cat /dev/random | strings --bytes 1 produces output
cat /dev/random | strings --bytes 1 | tr -d '\n\t ' does not produce output
NB: When using /dev/random you may have to wiggle your mouse or press keys (e.g. ctrl, shift, etc.) to generate entropy.
Why does the last example not work? Does tr have some kind of large internal buffer that /dev/urandom fills quickly but /dev/random doesn't?
P.S. I'm using CentOS 6.5
cat /proc/version
Linux version 2.6.32-431.3.1.el6.x86_64 ([email protected]) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) ) #1 SMP Fri Jan 3 21:39:27 UTC 2014
|
It will eventually.
In:
cat /dev/random | strings --bytes 1 | tr -d '\n\t '
cat will never buffer, but it's superfluous anyway as there's nothing to concatenate here.
< /dev/random strings --bytes 1 | tr -d '\n\t '
strings, though, since its output is no longer a terminal, will buffer its output in blocks (of something like 4 or 8kB), as opposed to line-buffering when the output goes to a terminal.
So it will only start writing to stdout when it has accumulated 4kB worth of characters to output, which on /dev/random is going to take a while.
tr output goes to a terminal (if you're running that at a shell prompt in a terminal), so it will buffer its output line-wise. Because you're removing the \n, it will never have a full line to write, so instead, it will write as soon as a full block has been accumulated (like when the output doesn't go to a terminal).
So, tr is likely not to write anything until strings has read enough from /dev/random so as to write 8kB (2 blocks possibly much more) of data (since the first block will probably contain some newline or tab or space characters).
On this system I'm trying this on, I can get an average of 3 bytes per second from /dev/random (as opposed to 12MiB on /dev/urandom), so in the best case scenario (the first 4096 bytes from /dev/random are all printable ones), we're talking 22 minutes before tr starts to output anything. But it's more likely going to be hours (in a quick test, I can see strings writing a block every 1 to 2 blocks read, and the output blocks contain about 30% of newline characters, so I'd expect it'd need to read at least 3 blocks before tr has 4096 characters to output).
To avoid that, you could do:
< /dev/random stdbuf -o0 strings --bytes 1 | stdbuf -o0 tr -d '\n\t '
stdbuf is a GNU command (also found on some BSDs) that alters the stdio buffering of commands via an LD_PRELOAD trick.
Note that instead of strings, you can use tr -cd '[:graph:]' which will also exclude tab, newline and space.
You may want to fix the locale to C as well to avoid possible future surprises with UTF-8 characters.
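Incidentally, the tr -cd '[:graph:]' variant makes the original password one-liner both simpler and buffer-friendly when fed from /dev/urandom:

```shell
# 32 printable, non-space characters from the kernel's PRNG:
LC_ALL=C tr -cd '[:graph:]' < /dev/urandom | head -c 32; echo
```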
| Reading from /dev/random does not produce any data |
1,300,264,276,000 |
I'm in the process of installing postgresql onto a second server
Previously I installed postgresql and then used the supplied script
./contrib/start-scripts/linux
Placed into the correct dir
# cp ./contrib/start-scripts/linux /etc/rc.d/init.d/postgresql92
# chmod 755 /etc/rc.d/init.d/postgresql92
Which I could then execute as expected with
# service postgresql92 start
However the new machine is using Systemd and it looks like there is a completely different way to do this
I don't want to hack at this and ruin something so I was wondering if anyone out there could point me in the right direction of how to achieve the same result
|
When installing from source, you will need to add a systemd unit file that works with the source install. For RHEL, Fedora my unit file looks like:
/usr/lib/systemd/system/postgresql.service
[Unit]
Description=PostgreSQL database server
After=network.target
[Service]
Type=forking
User=postgres
Group=postgres
# Where to send early-startup messages from the server (before the logging
# options of postgresql.conf take effect)
# This is normally controlled by the global default set by systemd
# StandardOutput=syslog
# Disable OOM kill on the postmaster
OOMScoreAdjust=-1000
# ... but allow it still to be effective for child processes
# (note that these settings are ignored by Postgres releases before 9.5)
Environment=PG_OOM_ADJUST_FILE=/proc/self/oom_score_adj
Environment=PG_OOM_ADJUST_VALUE=0
# Maximum number of seconds pg_ctl will wait for postgres to start. Note that
# PGSTARTTIMEOUT should be less than TimeoutSec value.
Environment=PGSTARTTIMEOUT=270
Environment=PGDATA=/usr/local/pgsql/data
ExecStart=/usr/local/pgsql/bin/pg_ctl start -D ${PGDATA} -s -w -t ${PGSTARTTIMEOUT}
ExecStop=/usr/local/pgsql/bin/pg_ctl stop -D ${PGDATA} -s -m fast
ExecReload=/usr/local/pgsql/bin/pg_ctl reload -D ${PGDATA} -s
# Give a reasonable amount of time for the server to start up/shut down.
# Ideally, the timeout for starting PostgreSQL server should be handled more
# nicely by pg_ctl in ExecStart, so keep its timeout smaller than this value.
TimeoutSec=300
[Install]
WantedBy=multi-user.target
Then enable the service on startup and start the PostgreSQL service:
$ sudo systemctl daemon-reload # load the updated service file from disk
$ sudo systemctl enable postgresql
$ sudo systemctl start postgresql
| Systemd postgresql start script |
1,300,264,276,000 |
I am trying to use systemd's EnvironmentFile and add an option to the command when it is set in the file. I have the following in the unit file:
ExecStart=/usr/bin/bash -c "echo ${PORT:+port is $PORT}"
which doesn't echo anything when I start the service.
The following works as expected:
ExecStart=/usr/bin/bash -c "echo port is $PORT"
which means that the file is read correctly.
Parameter substitution also works on command line:
$ PORT=1234 bash -c 'echo ${PORT:+port is $PORT}'
port is 1234
What am I missing?
|
systemd does its own minimalistic shell-style command line parsing of the contents of ExecStart= and other parameters. This minimalistic parsing supports basic environment variable substitution but apparently not things like ${PORT:+port is $PORT}.
You will want to prevent systemd from doing that and let the invoked shell handle it.
From the documentation:
To pass a literal dollar sign, use "$$".
So try this:
ExecStart=/usr/bin/bash -c "echo $${PORT:+port is $$PORT}"
Or better yet, try this:
ExecStart=/bin/sh -c "echo $${PORT:+port is $$PORT}"
because the type of variable expansion you are doing here is POSIX standard and is not a bash-ism. By using /bin/sh instead of bash you will remove an unnecessary dependancy on bash.
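The difference is easy to check outside systemd — this is exactly the command line the shell ends up seeing after systemd turns "$$" into "$":

```shell
# With PORT set, the ${VAR:+...} expansion produces the message;
# with PORT unset (assumed here) it expands to nothing:
PORT=1234 sh -c 'echo ${PORT:+port is $PORT}'   # port is 1234
sh -c 'echo ${PORT:+port is $PORT}'
```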
| Why does bash parameter expansion not work inside systemd service files? |
1,300,264,276,000 |
On Unix, a long time back, I learned about chmod:
the traditional way to set permissions on Unix
(and to allow programs to gain privileges, using setuid and setgid).
I have recently discovered some newer commands on GNU/Linux:
setfacl extends the traditional ugo:rwx bits and the t bit of chmod.
setcap gives more fine-grained control over privileges
than the ug:s bits of chmod.
chattr allows some other controls (a bit of a mix) over the file.
Are there any others?
|
chmod: change file mode bits
Usage (octal mode):
chmod octal-mode files...
Usage (symbolic mode):
chmod [references][[operator][modes]] files...
references is a combination of the letters ugoa,
which specify which user's access to the files will be modified:
u the user who owns it
g other users in the file's group
o other users not in the file's group
a all users
If omitted, it defaults to all users,
but only permissions allowed by the umask are modified.
operator is one of the characters +-=:
+ add the specified file mode bits
to the existing file mode bits of each file
- removes the specified file mode bits
from the existing file mode bits of each file
= adds the specified bits and removes unspecified bits, except the setuid and setgid bits set for directories, unless explicitly specified.
mode consists of a combination of the letters rwxXst, which specify which permission bits are to be modified:
r read
w write
x (lower case X) execute (or search for directories)
X (capital) execute/traverse only if the file is a directory
or already has an execute bit set for some user category
s setuid or setgid (depending on the specified references)
t restricted deletion flag or sticky bit
Alternatively, the mode can consist of one of the letters ugo,
in which case the mode corresponds to the permissions
currently granted to the owner (u), members of the file's group (g)
or users in neither of the preceding categories (o).
The various bits of chmod explained:
Access control (see also setfacl)
rwx — read (r), write (w), and execute/traverse (x) permissions
Read (r) affects if a file can be read, or if a directory can be listed.
Write (w) affects if a file can be written to,
or if a directory can be modified (files added, deleted, renamed).
Execute (x) affects if a file can be run,
use for scripts and other executable files.
Traverse (x), also known as "search",
affects whether a directory can be traversed;
i.e., whether a process can access (or try to access) file system objects
through entries in this directory.
s and t — sticky bit (t), and setgid (s) on directories
The sticky bit only affects directories. Will prevent anyone except file owner, and root, from deleting files in the directory.
The setgid bit on directories will cause new files and directories
to have the group set to the same group,
and new directories to have their setgid bit set
(see also defaults in setfacl).
s — setuid, setgid, on executable files
This can affect security in a bad way, if you don't know what you are doing.
When an executable is run, if one of these bits is set,
then the user/group of the executable
will become the effective user/group of the process.
Thus the program runs as that user.
See setcap for a more modern way to do this.
chown, chgrp: change file owner and group
Usage:
chown [owner][:[group]] files...
chgrp group files...
chattr: change file attributes
Usage:
chattr operator[attribute] files...
operator is one of the characters +-=:
+ adds the selected attributes to be to the existing attributes of the files
- removes the selected attributes
= overwrites the current set of attributes the files have with the specified attributes.
attribute is a combination of the letters acdeijmstuxACDFPST,
which correspond to the attributes:
a append only
c compressed
d no dump
e extent format
i immutable
j data journaling
m don't compress
s secure deletion
t no tail-merging
u undeletable
x direct access for files
A no atime updates
C no copy on write
D synchronous directory updates
F case-insensitive directory lookups
P project hierarchy
S synchronous updates
T top of directory hierarchy
There are restrictions on the use of many of these attributes.
For example, many of them can be set or cleared only
by the superuser (i.e., root) or an otherwise privileged process.
setfattr: change extended file attributes
Usage (set attribute):
setfattr -n name -v value files...
Usage (remove):
setfattr -x name files...
name is the name of the extended attribute to set or remove
value is the new value of the extended attribute
setfacl: change file access control lists
Usage:
setfacl option [default:][target:][param][:perms] files...
option must include one of the following:
--set set the ACL of a file or a directory, replacing the previous ACL
-m|--modify modify the ACL of a file or directory
-x|--remove remove ACL entries of a file or directory
target is one of the letters ugmo (or the longer forms shown below):
u, users permission of a named user identified by param,
defaults to file owner UID if omitted
g, group permission of a named group identified by param,
defaults to owning group GID if omitted
m, mask effective rights mask
o, other permissions of others
perms is a combination of the letters rwxX, which correspond to the permissions:
r read
w write
x execute
X execute only if the file is a directory or already has execute permission for some user
Alternatively, perms may be an octal digit (0-7) indicating the set of permissions.
setcap: change file capabilities
Usage:
setcap capability-clause file
A capability-clause consists of a comma-separated list of capability names followed by a list of operator-flag pairs.
The available operators are =, + and -. The available flags are e, i and p which correspond to the Effective, Inheritable and Permitted capability sets.
The = operator will raise the specified capability sets and reset the others. If no flags are given in conjunction with the = operator all the capability sets will be reset. The + and - operators will raise or lower the one or more specified capability sets respectively.
chcon: change file SELinux security context
Usage:
chcon [-u user] [-r role] [-t type] files...
user is the SELinux user, such as user_u, system_u or root.
role is the SELinux role (always object_r for files)
type is the SELinux subject type
chsmack: change SMACK extended attributes
SMACK is Simplified Mandatory Access Control Kernel.
Usage:
chsmack -a value file
value is the SMACK label to be set for the SMACK64 extended file attribute
setrichacl: change rich access control list
richacls are a feature that will add more advanced ACLs.
Currently a work in progress, so I can not tell you much about them. I have not used them.
See also this question Are there more advanced filesystem ACLs beyond traditional 'rwx' and POSIX ACL?
and man page
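A few of the commands above in action (file, directory, and user names are examples; setfacl and setcap additionally require the respective tools plus filesystem/kernel support):

```shell
chmod u+rwx,g+rx,o= file          # symbolic chmod
chmod 2775 shared-dir             # octal chmod, setgid on a directory
setfacl -m u:alice:rw- file       # grant one named user read/write
getfacl file                      # inspect the resulting ACL
setcap cap_net_bind_service=+ep ./server   # let a binary bind low ports
```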
| What are the different ways to set file permissions, etc., on GNU/Linux? |
1,300,264,276,000 |
I am in the EU, zone +1, or +2 when DST is on. Now is Sunday, 31.3.2024. This Sunday morning, we changed into DST, at 2→3 (CET → CEST).
I have Linux servers in separate networks reporting monthly energy consumption at 23:58 of the last day of the month. They have worked perfectly for many years. Networks are up to 8 hours travel apart!
Each server has an RTC and syncs NTP regularly. There are many safeguards if anything is suspicious, and three core servers in each network constantly check against each other whether everything, including the time, is OK. Yes, I am paranoid. I tolerate no (0) bugs. My systems work perfectly stable for many years. Worked, sorry.
Last night, at 23:58 on March 30th, all of my Raspberry Pi servers decided that March 30th is followed by the 1st of the next month! I verified in 7 (seven) different and independent ways per network that everything really did occur on March 30th, within the minute 23:58! Confirmation includes 9 separate devices ignoring DST, plus an external US and an EU mail provider.
At 23:58 of each day, my servers do a bash test that can apparently go wrong many ways:
(( $(date -d tomorrow +"%-d") == 1 )) && ZadnjiDanMjeseca=1 || ZadnjiDanMjeseca=""
I do not see any way for this test to go wrong! I hope I am wrong. At this moment, on 31.3.2024, everything works perfectly fine, and zdump is correct! (The following example appears misformatted for me; there should be four distinct rows.)
hwclock; date; date -d tomorrow +"%-d"
2024-03-31 19:22:23.311664+02:00
ned, 31.03.2024. 19:22:23 CEST
1
The only way I can explain this issue is: the Linux distribution I use, Raspbian, somehow has two separate kernel functions calculating time. One of them, sadly, missed the fact that this is a leap year! Oops!
But, if so, why should it be limited to this Raspberry Pi OS version and kernel? Hell, Microsoft failed to make Excel calculate leap years correctly!
Over this weekend, I have seen several institutions (banks, our national tax office...) in my country having date related issues and closing the shop, too, so it seems not to be limited to Raspbian.
This goes against my instincts as a programmer, but I can find no alternative explanation. Hopefully, I am wrong...
Before posting, I made a switch from 23:58 to the 00:03 on the 1st, and where I must log data at 23:58 of the last day of the month, I test by adding 300 seconds, not asking for +1 day. These solutions should work.
So it is not buried in the comments: as I never use such calculations, I made a basic error in expecting that "tomorrow" actually means tomorrow! It doesn't make any sense to me; it really means +24 hours. So, I had expected these two commands to produce the same output:
> date -d 'next tue'
uto, 2.04.2024. 00:00:00 CEST
> date -d 'tomorrow'
uto, 2.04.2024. 10:24:55 CEST
instead of having to type:
date -d "tomorrow 0"
|
Well, I get:
# date -s '2024-03-30 23:58'; date -d tomorrow
Sat Mar 30 23:58:00 EET 2024
Mon Apr 1 00:58:00 EEST 2024
which isn't really surprising since 24 hours after 2024-03-30 23:58 it is 2024-04-01 00:58. Just that the latter is in daylight savings time.
The manual says:
Relative items adjust a date (or the current date if none) forward or backward. The effects of relative items accumulate.
[...]
More precise units are [...], ‘day’ worth 24 hours,
[...]
The string ‘tomorrow’ is worth one day in the future (equivalent to ‘day’),
[...]
When a relative item causes the resulting date to cross a boundary where the clocks were adjusted, typically for daylight saving time, the resulting date and time are adjusted accordingly.
The way to avoid issues like that is to ask for a time that's not close to midnight.
E.g. noon-ish usually falls on the right day:
# date -s '2024-03-30 23:58'; date -d '+12 hours'
Sat Mar 30 23:58:00 EET 2024
Sun Mar 31 12:58:00 EEST 2024
And this also seems to work:
# date -s '2024-03-30 23:58'; date -d '12:00 tomorrow'
Sat Mar 30 23:58:00 EET 2024
Sun Mar 31 12:00:00 EEST 2024
| How could March 30th 2024 be followed by the 1st? |
1,300,264,276,000 |
In my terminal it printed out a seemingly random number 127. I think it is printing some variable's value and to check my suspicion, I defined a new variable v=4. Running echo $? again gave me 0 as output.
I'm confused as I was expecting 4 to be the answer.
|
From man bash:
$? Expands to the exit status of the most recently executed foreground pipeline.
echo $? will return the exit status of the last command. You got 127; that is the exit status of the last executed command, which exited with some error (most probably). Commands that complete successfully exit with a status of 0 (most probably). The last command gave output 0 since the echo $v on the previous line finished without an error.
If you execute the commands
v=4
echo $v
echo $?
You will get output as:
4 (from echo $v)
0 (from echo $?)
Also try:
true
echo $?
You will get 0.
false
echo $?
You will get 1.
The true command does nothing, it just exits with a status code 0; and the false command also does nothing, it just exits with a status code indicating failure (i.e. with status code 1).
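Note also that $? refers to the most recent pipeline as a whole, which by default means the exit status of its last command; a small sketch:

```shell
false | true
echo $?      # 0 - the status of the last command in the pipeline ('true') wins

true | false
echo $?      # 1 - here the last command in the pipeline failed
```

In bash, `set -o pipefail` changes this so that any failing member makes the whole pipeline fail.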
| What does echo $? do? [duplicate] |
1,300,264,276,000 |
This is the contents of the /home3 directory on my system:
./ backup/ hearsttr@ lost+found/ randomvi@ sexsmovi@
../ freemark@ investgr@ nudenude@ romanced@ wallpape@
I want to clean this up but I am worried because of the symlinks, which point to another drive.
If I say rm -rf /home3 will it delete the other drive?
|
rm -rf /home3 will delete all files and directories within home3 and home3 itself, which includes symlink files, but it will not "follow" (dereference) those symlinks.
Put another way: the symlink files themselves will be deleted. The files they "point"/"link" to will not be touched.
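A quick way to convince yourself, in a scratch directory (the paths below are just for the demonstration):

```shell
mkdir -p /tmp/rmdemo/real /tmp/rmdemo/tree
echo data > /tmp/rmdemo/real/file
ln -s /tmp/rmdemo/real /tmp/rmdemo/tree/link

rm -rf /tmp/rmdemo/tree       # deletes the symlink itself...
cat /tmp/rmdemo/real/file     # ...the target still exists and prints: data
```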
| If I rm -rf a symlink will the data the link points to get erased, too? |
1,300,264,276,000 |
I want to make a bash script to delete the oldest file from a folder. Every time I run the script, only one file should be deleted: the oldest one. Can you help me with this?
Thanks
|
As Kos pointed out, it might not be possible to know the oldest file (as per creation date).
If modification times are good for you, and if file names contain no newlines:
rm -- "$(ls -rt | head -n 1)"
(if you don't have control over the file names beware of parsing ls output)
Add the -A option to ls if hidden files are also to be considered.
Since GNU coreutils 9.0, GNU ls now has a --zero option to list file names NUL-delimited, so with those recent versions and bash or zsh as the shell, a version that would work with any file name would be:
IFS= read -rd '' file < <(ls --zero -rt) &&
rm -- "$file"
Those newer versions also have a --time=birth which can sort by birth time on some systems and filesystems (including recent GNU/Linux ones on most native filesystems).
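With GNU findutils and coreutils, a filename-safe variant is also possible by keeping everything NUL-delimited end to end; a sketch, demonstrated on a scratch directory so it can be run as-is (`-maxdepth 1` keeps it from descending into subfolders):

```shell
dir=$(mktemp -d)
touch -d '2001-01-01' "$dir/oldest file"   # a space in the name is fine here
touch -d '2020-01-01' "$dir/newer"

# %T@ prints the mtime as seconds since the epoch; sort numerically,
# keep the first record, strip the timestamp, and remove that one file:
find "$dir" -maxdepth 1 -type f -printf '%T@\t%p\0' |
  sort -zn | head -zn 1 | cut -zf 2- |
  xargs -0 -r rm --

ls "$dir"    # only "newer" is left
```

The -z flags of sort, head and cut need reasonably recent coreutils (head -z appeared in 8.25).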
| Bash script to remove the oldest file from from a folder |
1,300,264,276,000 |
In "https://stackoverflow.com/questions/13038143/how-to-get-pids-in-one-process-group-in-linux-os" I see all answers mentioning ps and none mentioning /proc.
"ps" seems to be not very portable (Android and Busybox versions expect different arguments), and I want to be able list pids with pgids with simple and portable tools.
In /proc/.../status I see Tgid: (thread group ID), Gid: (group id for security, not for grouping processes together), but not PGid:...
What are other (not using ps) ways of getting pgid from pid?
|
You can look at the 5th field in the output of /proc/[pid]/stat.
$ ps -ejH | grep firefox
3043 2683 2683 ? 00:00:21 firefox
$ < /proc/3043/stat sed -n '$s/.*) [^ ]* [^ ]* \([^ ]*\).*/\1/p'
2683
From man proc:
/proc/[pid]/stat
Status information about the process. This is used by ps(1). It is defined in /usr/src/linux/fs/proc/array.c.
The fields, in order, with their proper scanf(3) format specifiers, are:
pid %d The process ID.
comm %s The filename of the executable, in parentheses. This is visible whether or not the executable is swapped out.
state %c One character from the string "RSDZTW" where R is running, S is sleeping in an interruptible wait, D is waiting in
uninterruptible disk sleep, Z is zombie, T is traced or stopped (on a signal), and W is paging.
ppid %d The PID of the parent.
pgrp %d The process group ID of the process.
session %d The session ID of the process.
Note that you cannot use:
awk '{print $5}'
Because that file is not a blank-separated list. The second field (the process name) may contain blanks or even newline characters. For instance, most of the threads of firefox typically have space characters in their name.
So you need to print the 3rd field after the last occurrence of a ) character in there.
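On Linux you can sanity-check the extraction on the current shell itself, since every process can read its own stat file:

```shell
pid=$$
# Everything after the last ')' is fixed-format: state, ppid, pgrp, ...
pgid=$(sed -n '$s/.*) [^ ]* [^ ]* \([^ ]*\).*/\1/p' "/proc/$pid/stat")
echo "$pgid"    # the process-group ID of the current shell
```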
| Is it possible to get process group ID from /proc? |
1,300,264,276,000 |
Is there a kernel module or some other patch or something similar to Windows' ReadyBoost?
Basically I'm looking for something that allows disk reads to be cached on a Flash drive.
|
Bcache could be exactly what you're looking for:
Bcache is a Linux kernel block layer cache. It allows one or more fast disk drives such as flash-based solid state drives (SSDs) to act as a cache for one or more slower hard disk drives.
I'm eagerly awaiting its inclusion into Linux mainline, but unfortunately it's still not quite there.
Some nice and readable info is also available here:
http://lwn.net/Articles/394672/
http://www.phoronix.com/scan.php?page=news_item&px=MTEwMDg
Try it out and see how it works on your system!
| Linux equivalent to ReadyBoost? |
1,300,264,276,000 |
I can fire
ls -lrt
to get files and folders sorted by modification date, but this does not separate directories from files. I want ls to show me first directories by modification date and then files by modification date. How to do that?
|
what about something like this:
ls -ltr --group-directories-first
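For example (GNU ls assumed; demonstrated in a scratch directory):

```shell
d=$(mktemp -d)
mkdir "$d/somedir"
touch "$d/afile"

# Directories come first as a group; each group is still sorted by mtime:
ls -1tr --group-directories-first "$d"
# somedir
# afile
```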
| How to sort results from ls command by modification date (directories first)? |
1,300,264,276,000 |
I am installing an SSD and would like to put / on the SSD and /home, /var, and /tmp on the HDD. My current distro is Kubuntu but I would not mind trying another distro if this procedure can be accomplished easier there. I have installed many different Linux OSes on multiple partitions, however I know of no installer that lets one mount multiple directories on a single partition. I would rather not use three separate partitions as particularly /home, /var, and /tmp are prone to large changes in size and it is not practical to allot each of them some arbitrary maximum.
Note that I am discussing a new install, not moving the current system to the SSD / HD split.
|
There are two approaches you can use. For either approach, you first need to mount your hard disk partition somewhere (for example, under /hd) and also add it to /etc/fstab, then create home, var, and tmp inside the mount.
Use symlinks. Then create symlinks from /home to /hd/home, etc.
Instead of symlinks, use bind mounts. Syntax is mount --bind /hd/home /home. You can (should) also put that in fstab, using 'bind' as the fstype.
The basic way to get it to install like that is to set up the target filesystem by hand before starting the actual install. I know it's easy enough with debian-installer to use the installer to create your partitions, mount them, and then switch to a different terminal (say, alt-f2), cd into /target, and create your symlinks (or bind mounts). Then switch back to alt-f1 and continue the install. Ubuntu's (and I assume Kubuntu's) installers are based on debian-installer, so I assume similar is possible.
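For the bind-mount variant, the fstab entries could look like this sketch (the device name, filesystem type and the /hd mount point are assumptions for illustration):

```
# /etc/fstab - HDD partition mounted at /hd, then bind-mounted into place
/dev/sdb1   /hd    ext4   defaults   0  2
/hd/home    /home  none   bind       0  0
/hd/var     /var   none   bind       0  0
/hd/tmp     /tmp   none   bind       0  0
```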
| How to mount multiple directories on the same partition? |
1,300,264,276,000 |
A hacker has dropped a file in my tmp dir that is causing issues. Nothing malicious except creating GB's of error_log entries because their script is failing. However, the file they are using to execute has no permissions and even as ROOT I can't delete or rename this file.
---------- 1 wwwusr wwwusr 1561 Jan 19 02:31 zzzzx.php
root@servername [/home/wwwusr/public_html/tmp]# rm zzzzx.php
rm: remove write-protected regular file './zzzzx.php'? y
rm: cannot remove './zzzzx.php': Operation not permitted
I have also tried removing by inode
root@servername [/home/wwwusr/public_html/tmp]# ls -il
...
1969900 ---------- 1 wwwusr wwwusr 1561 Jan 19 02:31 zzzzx.php
root@servername [/home/wwwusr/public_html/tmp]# find . -inum 1969900 -exec rm -i {} \;
rm: remove write-protected regular file './zzzzx.php'? y
rm: cannot remove './zzzzx.php': Operation not permitted
How do I delete this file?
|
The file has probably been locked using file attributes.
As root, do
lsattr zzzzx.php
Attributes a (append mode) or i (immutable) present would prevent your rm. If they're there, then
chattr -ai zzzzx.php
rm zzzzx.php
should delete your file.
| How do I remove a file with no permissions? |
1,300,264,276,000 |
As for the "Spectre" security vulnerability, "Retpoline" was introduced to be a solution to mitigate the risk. However, I've read a post that mentioned:
If you build the kernel without CONFIG_RETPOLINE, you can't build modules with retpoline and then expect them to load — because the thunk symbols aren't exported.
If you build the kernel with the retpoline though, you can successfully load modules which aren't built with retpoline. (Source)
Is there an easy and common/generic/unified way to check if kernel is "Retpoline" enabled or not? I want to do this so that my installer can use the proper build of kernel module to be installed.
|
If you’re using mainline kernels, or most major distributions’ kernels, the best way to check for full retpoline support (i.e. the kernel was configured with CONFIG_RETPOLINE, and was built with a retpoline-capable compiler) is to look for “Full generic retpoline” in /sys/devices/system/cpu/vulnerabilities/spectre_v2. On my system:
$ cat /sys/devices/system/cpu/vulnerabilities/spectre_v2
Mitigation: Full generic retpoline, IBPB, IBRS_FW
If you want more comprehensive tests, to detect retpolines on kernels without the spectre_v2 systree file, check out how spectre-meltdown-checker goes about things.
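For scripting this in an installer, a possible sketch (the sysfs file may be absent on old kernels, so treat that case separately):

```shell
f=/sys/devices/system/cpu/vulnerabilities/spectre_v2
if [ -r "$f" ] && grep -q 'Full generic retpoline' "$f"; then
    retpoline=yes
elif [ -r "$f" ]; then
    retpoline=no
else
    retpoline=unknown    # older kernel without the spectre_v2 sysfs file
fi
echo "retpoline=$retpoline"
```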
| How to check if Linux kernel is "Retpoline" enabled or not? |
1,300,264,276,000 |
I implemented my own Serial-ATA Host-Bus-Adapter (HBA) in VHDL and programmed it onto an FPGA. An FPGA is a chip which can be programmed with any digital circuit. It's also equipped with serial transceivers to generate high speed signals for SATA or PCIe.
This SATA controller supports SATA 6 Gb/s line rates and uses ATA-8 DMA-IN/OUT commands to transfer data in up to 32 MiB chunks to and from the device. The design is proven to work at maximum speed (e.g. Samsung SSD 840 Pro -> over 550 MiB/s).
After some tests with several SSD and HDD devices, I bought a new Seagate 6 TB Archive HDD (ST6000AS0002). This HDD reaches up to 190 MiB/s read performance, but only 30 to 40 MiB/s write performance!
So I dug deeper and measured the transmitted frames (yes, that's possible with an FPGA design). As far as I can tell, the Seagate HDD is ready to receive the first 32 MiB of a transfer in one piece. This transfer happens at the maximum line speed of 580 MiB/s. After that, the HDD stalls the remaining bytes for over 800 ms! Then the HDD is ready to receive the next 32 MiB and stalls again for 800 ms. All in all, a 1 GiB transfer needs over 30 seconds, which equals roughly 35 MiB/s.
I assume that this HDD has a 32 MiB write cache, which is flushed in between the burst cycles. Data transfers with less than 32 MiB don't show this behavior.
My controller uses DMA-IN and DMA-OUT commands to transfer data. I'm not using the QUEUED-DMA-IN and QUEUED-DMA-OUT commands, which are used by NCQ-capable AHCI controllers. Implementing AHCI and NCQ on an FPGA platform is very complex and not needed by my application layer.
I would like to reproduce this scenario on my Linux PC, but the Linux AHCI driver has NCQ enabled by default. I need to disable NCQ, so I found this website describing how to disable NCQ, but it doesn't work.
The Linux PC still reaches 190 MiB/s write performance.
> dd if=/dev/zero of=/dev/sdb bs=32M count=32
1073741824 bytes (1.1 GB) copied, 5.46148 s, 197 MB/s
I think there is a fault in the article from above: reducing the NCQ queue depth to 1 does not disable NCQ. It just allows the OS to use only one queue. It can still use QUEUED-DMA-** commands for the transfer. I need to really disable NCQ so the driver issues DMA-IN/OUT commands to the device.
So here are my questions:
How can I disable NCQ?
If NCQ queue depth = 1, is Linux's AHCI driver using QUEUED-DMA-** or DMA-** commands?
How can I check if NCQ is disabled, given that changing /sys/block/sdX/device/queue_depth is not reported in dmesg?
|
Thanks to @frostschutz, I could measure the write performance in Linux without NCQ feature. The kernel boot parameter libata.force=noncq disabled NCQ completely.
Regarding my Seagate 6TB write performance problem, there was no change in speed. Linux still reaches 180 MiB/s.
But then I had another idea:
The Linux driver does not use transfers of 32 MiB chunks. The kernel buffer is much smaller, especially if NCQ with 32 queues is enabled (32 queues * 32 MiB => 1 GiB AHCI buffer).
So I tested my SATA controller with 256 KiB transfers and voilà, it's possible to reach 185 MiB/s.
So I guess the Seagate ST6000AS0002 firmware is not capable of handling big ATA burst transfers. The ATA standard allows up to 65,536 logical blocks, which equals 32 MiB.
SMR - Shingled Magnetic Recording
Another possibility for the bad write performance could be the shingled magnetic recording technique, which is used by Seagate in these archive devices. Obviously, I triggered a rare effect with my FPGA implementation.
| How to (really) disable NCQ in Linux |
1,300,264,276,000 |
In a GUI environment, a drag-and-drop with replace will replace files and entire directories (including contents) with whatever is being copied in. Is there a way to accomplish this same intuitive result with the 'mv' command?
|
Not with mv.
The core function of mv (despite its name) is to rename an object. One of the guarantees that UNIX makes is that renames are atomic -- you're never allowed to see a partially completed rename. This guarantee can be very useful if you want to change a file (/etc/passwd, for example) that other programs may be looking at, and you want them to see either the old or new version of the file, and no other possibility. But a "recursive rename" like you describe would break that guarantee -- you could stop it in the middle and you'd have a half-moved tree and probably a mess -- and so it doesn't really fit in with the philosophy of mv. That's my guess as to why mv -r doesn't exist.
(Never mind that mv breaks that philosophy in other, smaller ways. For example, mv actually does a cp followed by rm when moving files from one filesystem to another.)
Enough philosophy. If you want to recursively move ("drag-drop") a tree from one place to another on the same filesystem, you can get the efficiency and speed of mv as follows (for example):
cp -al source/* dest/ && rm -r source/*
The -l flag to cp means "create a hard link instead of copying" -- it's effectively creating a new filename that points to the same file data as the old filename. This only works on filesystems that support hard links, though -- so any native UNIX-ish filesystem is fine, but it won't work with FAT.
The && means "only run the following command if the preceding command succeeded". If you like, you can run the two commands one at a time instead.
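A toy run on scratch directories (both under /tmp, i.e. on one filesystem, which hard links require):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
mkdir "$src/tree"
echo hello > "$src/tree/file"

cp -al "$src"/* "$dst"/ && rm -r "$src"/*

cat "$dst/tree/file"   # hello - the data was never copied, just re-linked
ls -A "$src"           # prints nothing: the source tree is gone
```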
| 'mv' equivalent of drag and drop with replace? |
1,300,264,276,000 |
How can I view the IO priority of a process? like to see for example if something has been ionice-ed.
|
ionice -p <pid>
For example:
$ ionice -p `pidof X`
none: prio 0
This means X is using the none scheduling class (best effort) with priority 0 (highest priority out of 7). Read more with man ionice.
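For example, checking the calling shell itself (util-linux ionice assumed):

```shell
# $$ is the current shell's PID; the output names the class and priority
ionice -p $$
# prints e.g. "none: prio 0" or "best-effort: prio 4", depending on the class
```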
| How do I view the IO priority of a process? |
1,300,264,276,000 |
Due to work I have recently started using OS X and have set it up using homebrew in order to get a similar experience as with Linux.
However, there are quite a few differences in their settings. Some only need to be in place on one system. As my dotfiles live in a git repository, I was wondering what kind of switch I could set in place, so that some configs are only read for Linux system and other for OS X.
As to dotfiles, I am referring, among other, to .bash_profiles or .bash_alias.
|
Keep the dotfiles as portable as possible and avoid OS dependent settings or switches that require a particular version of a tool, e.g. avoid GNU syntax if you don't use GNU software on all systems.
You'll probably run into situations where it's desirable to use system specific settings. In that case use a switch statement with the individual settings:
case $(uname) in
'Linux') LS_OPTIONS='--color=auto --group-directories-first' ;;
'FreeBSD') LS_OPTIONS='-Gh -D "%F %H:%M"' ;;
'Darwin') LS_OPTIONS='-h' ;;
esac
In case the configuration files of arbitrary applications require different options, you can check if the application provides compatibility switches or other mechanisms. For vim, for instance, you can check the version and patch level to guard features that older versions, or versions compiled with a different feature set, don't have. Example snippet from .vimrc:
if v:version >= 703
if has("patch769")
set matchpairs+=“:”
endif
endif
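Another pattern that keeps the main file clean is sourcing an optional per-OS fragment, e.g. ~/.bashrc.Linux or ~/.bashrc.Darwin (the naming scheme here is just an assumption); sketched against a scratch HOME so it is safe to run:

```shell
HOME=$(mktemp -d)    # scratch HOME, only for this demonstration
echo 'greeting="hello from $(uname)"' > "$HOME/.bashrc.$(uname)"

# The portable part of .bashrc: source the OS-specific fragment if present
os_rc="$HOME/.bashrc.$(uname)"
[ -r "$os_rc" ] && . "$os_rc"

echo "$greeting"     # hello from Linux (or Darwin, FreeBSD, ...)
```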
| How to keep dotfiles system-agnostic? |
1,300,264,276,000 |
While going through the "Output of dmesg" I could see a list of values which i am not able to understand properly.
Memory: 2047804k/2086248k available (3179k kernel code, 37232k reserved, 1935k data, 436k init, 1176944k highmem)
virtual kernel memory layout:
fixmap : 0xffc57000 - 0xfffff000 (3744 kB)
pkmap : 0xff800000 - 0xffa00000 (2048 kB)
vmalloc : 0xf7ffe000 - 0xff7fe000 ( 120 MB)
lowmem : 0xc0000000 - 0xf77fe000 ( 887 MB)
.init : 0xc0906000 - 0xc0973000 ( 436 kB)
.data : 0xc071ae6a - 0xc08feb78 (1935 kB)
.text : 0xc0400000 - 0xc071ae6a (3179 kB)
From the values I understand that I have 2 GB RAM (physical memory). But the rest looks like magic numbers to me.
I would like to know about each one (fixmap, pkmap, etc.) briefly (if I have more doubts, I will post each one as a separate question)?
Could someone explain that to me?
|
First off, a 32-bit system has 2^32 linear addresses (0x00000000 to 0xffffffff) to access physical locations on top of the RAM.
The kernel divides these addresses into user and kernel space.
User space (high memory) can be accessed by the user and, if necessary, also by the kernel.
The address range in hex and dec notation:
0x00000000 - 0xbfffffff
0 - 3'221'225'471
Kernel space (low memory) can only be accessed by the kernel.
The address range in hex and dec notation:
0xc0000000 - 0xffffffff
3'221'225'472 - 4'294'967'295
Like this:
0x00000000 0xc0000000 0xffffffff
| | |
+------------------------+----------+
| User | Kernel |
| space | space |
+------------------------+----------+
Thus, the memory layout you saw in dmesg corresponds to the mapping of linear addresses in kernel space.
First, the .text, .data and .init sequences which provide the initialization of the kernel's own page tables (translate linear into physical addresses).
.text : 0xc0400000 - 0xc071ae6a (3179 kB)
The range where the kernel code resides.
.data : 0xc071ae6a - 0xc08feb78 (1935 kB)
The range where the kernel data segments reside.
.init : 0xc0906000 - 0xc0973000 ( 436 kB)
The range where the kernel's initial page tables reside.
(and another 128 kB for some dynamic data structures.)
This minimal address space is just large enough to install the kernel in the RAM and to initialize its core data structures.
Their used size is shown inside the parenthesis, take for example the kernel code:
0xc071ae6a - 0xc0400000 = 31AE6A
In decimal notation, that's 3'255'914 (3179 kB).
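You can redo that arithmetic directly in the shell, since $(( )) understands 0x-prefixed hex:

```shell
start=0xc0400000 end=0xc071ae6a
printf '%d bytes (~%d kB)\n' $(( end - start )) $(( (end - start) / 1024 ))
# 3255914 bytes (~3179 kB)
```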
Second, the usage of kernel space after initialization
lowmem : 0xc0000000 - 0xf77fe000 ( 887 MB)
The lowmem range can be used by the kernel to directly access physical addresses.
This is not the full 1 GB, because the kernel always requires at least 128 MB of linear addresses to implement noncontiguous memory allocation and fix-mapped linear addresses.
vmalloc : 0xf7ffe000 - 0xff7fe000 ( 120 MB)
Virtual memory allocation can allocate page frames based on a noncontiguous scheme. The main advantage of this schema is to avoid external fragmentation, this is used for swap areas, kernel modules or allocation of buffers to some I/O devices.
pkmap : 0xff800000 - 0xffa00000 (2048 kB)
The permanent kernel mapping allows the kernel to establish long-lasting mappings of high-memory page frames into the kernel address space. When a HIGHMEM page is mapped using kmap(), virtual addresses are assigned from here.
fixmap : 0xffc57000 - 0xfffff000 (3744 kB)
These are fix-mapped linear addresses which can refer to any physical address in the RAM, not just the last 1 GB like the lowmem addresses. Fix-mapped linear addresses are a bit more efficient than their lowmem and pkmap colleagues.
There are dedicated page table descriptors assigned for fixed mapping, and mappings of HIGHMEM pages using kmap_atomic are allocated from here.
If you want to dive deeper into the rabbit hole:
Understanding the Linux Kernel
| What does the Virtual kernel Memory Layout in dmesg imply? |
1,300,264,276,000 |
I have a hypothesis: sometimes TCP connections arrive faster than my server can accept() them. They queue up until the queue overflows and then there are problems.
How can I confirm this is happening?
Can I monitor the length of the accept queue or the number of overflows? Is there a counter exposed somewhere?
|
To check if your queue is overflowing use either netstat or nstat
[centos ~]$ nstat -az | grep -i listen
TcpExtListenOverflows 3518352 0.0
TcpExtListenDrops 3518388 0.0
TcpExtTCPFastOpenListenOverflow 0 0.0
[centos ~]$ netstat -s | grep -i LISTEN
3518352 times the listen queue of a socket overflowed
3518388 SYNs to LISTEN sockets dropped
Reference:
https://perfchron.com/2015/12/26/investigating-linux-network-issues-with-netstat-and-nstat/
To monitor your queue sizes, use the ss command and look for SYN-RECV sockets.
$ ss -n state syn-recv sport = :80 | wc -l
119
Reference:
https://blog.cloudflare.com/syn-packet-handling-in-the-wild/
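To watch it over time you can sample ss periodically; for LISTEN sockets the Recv-Q column is the current accept-queue length and Send-Q its maximum (the backlog). A rough sketch for port 80 (three samples only, so it terminates):

```shell
for i in 1 2 3; do
    # -H drops the header, -tln selects listening TCP sockets numerically
    depth=$(ss -Htln 'sport = :80' | awk '{print $2; exit}')
    printf '%s accept-queue=%s\n' "$(date +%T)" "${depth:-n/a}"
    sleep 1
done
```

If nothing listens on the port, the filter matches no socket and the sample prints n/a.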
| How can I monitor the length of the accept queue? |
1,300,264,276,000 |
This is OpenSUSE Leap 42. I have a computer with 2x 500 GB SATA HDDs, and to speed it up I put in a small 30 GB SSD drive for the system. During installation the HDDs were disconnected, as they confused the installer (and me). Once the system was up, I quite easily exchanged the /home directory for an XFS logical volume (I use LVM primarily to add space easily). Then /opt filled up (Chrome and Botanicula) and I wanted to put that on a volume on the HDD. So I created a volume and formatted it with BTRFS. After some head scratching (the @ subvolumes in fstab made me read up on BTRFS) I did what I needed: /opt is now 100 GB in size.
But the question is: Does it make sense to format a LVM volume with btrfs? Essentially they both are volume handling systems.
For illustration I paste my fstab (# comments show my edits) and vgscan + lvscan output:
~> cat /etc/fstab
UUID=1b511986-9c20-4885-8385-1cc03663201b swap swap defaults 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af / btrfs defaults 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /boot/grub2/i386-pc btrfs subvol=@/boot/grub2/i386-pc 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /boot/grub2/x86_64-efi bt
rfs subvol=@/boot/grub2/x86_64-efi 0 0
UUID=3e103686-52e9-44ac-963f-5a76177af56b /opt btrfs defaults 0 0
#UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /opt btrfs subvol=@/opt 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /srv btrfs subvol=@/srv 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /tmp btrfs subvol=@/tmp 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /usr/local btrfs subvol=@/usr/local 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/crash btrfs subvol=@/var/crash 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/lib/libvirt/images btrfs subvol=@/var/lib/libvirt/images 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/lib/mailman btrfs subvol=@/var/lib/mailman 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/lib/mariadb btrfs subvol=@/var/lib/mariadb 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/lib/mysql btrfs subvol=@/var/lib/mysql 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/lib/named btrfs subvol=@/var/lib/named 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/lib/pgsql btrfs subvol=@/var/lib/pgsql 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/log btrfs subvol=@/var/log 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/opt btrfs subvol=@/var/opt 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/spool btrfs subvol=@/var/spool 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /var/tmp btrfs subvol=@/var/tmp 0 0
UUID=30e20531-b7f1-4bde-b2d2-fab8eeca23af /.snapshots btrfs subvol=@/.snapshots 0 0
UUID=c4c4f819-a548-4881-b854-a0ed62e7952e /home xfs defaults 1 2
#UUID=e14edbfa-ddc2-4f6d-9cba-245d828ba8aa /home xfs defaults 1 2
~>
# vgscan
Reading all physical volumes. This may take a while...
Found volume group "r0data" using metadata type lvm2
Found volume group "r0sys" using metadata type lvm2
# lvscan
ACTIVE '/dev/r0data/homer' [699.53 GiB] inherit
ACTIVE '/dev/r0sys/optr' [100.00 GiB] inherit
After the answer:
Thanks, I understand now the key differences. To me LVM is indeed better for managing space with whatever filesystems on top of it, but BTRFS should be used for features specific to it - mainly snapshots. In simple home network use it is probably better to stay away from it. I've had too much grief managing space on a small drive, but I'd imagine space would be eaten away also on big drives.
|
Maybe this explains ( from the btrfs wiki by the way )
A subvolume in btrfs is not the same as an LVM logical volume or a ZFS subvolume. With LVM, a logical volume is a block device in its own right (which could for example contain any other filesystem or container like dm-crypt, MD RAID, etc.) - this is not the case with btrfs.
A btrfs subvolume is not a block device (and cannot be treated as one) instead, a btrfs subvolume can be thought of as a POSIX file namespace. This namespace can be accessed via the top-level subvolume of the filesystem, or it can be mounted in its own right.
see also https://btrfs.wiki.kernel.org/index.php/FAQ
Interaction with partitions, device managers and logical volumes
Btrfs has subvolumes, does this mean I don't need a logical volume manager and I can create a big Btrfs filesystem on a raw partition?
There is not a single answer to this question. Here are the issues to think about when you choose raw partitions or LVM:
Performance
- raw partitions are slightly faster than logical volumes
- btrfs does write optimisation (sequential writes) across a filesystem; subvolume write performance will benefit from this algorithm
- creating multiple btrfs filesystems, each on a different LV, means that the algorithm can be ineffective (although the kernel will still perform some optimization at the block device level)
Online resizing and relocating the filesystem across devices
- the pvmove command from LVM allows filesystems to move between devices while online
- raw partitions can only be moved to a different starting cylinder while offline
- raw partitions can only be made bigger if there is free space after the partition, while LVM can expand an LV onto free space anywhere in the volume group - and it can do the resize online
Subvolume/logical volume size constraints
- LVM is convenient for creating fixed size logical volumes (e.g. 10MB for each user, 20GB for each virtual machine image, etc)
- subvolumes don't currently enforce such rigid size constraints, although the upcoming qgroups feature will address this issue
.... the FAQ continues to explain the scenario's in which LVM+BTRFS make sense
| Does it make sense to put btrfs on lvm? |
1,446,569,414,000 |
I'm somewhat baffled that inside a Docker container lsof -i doesn't yield any output.
Example (all commands/output from inside the container):
[1] root@ec016481cf5f:/# lsof -i
[1] root@ec016481cf5f:/# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
Please also note how no PID or program name is shown by netstat. fuser also gives somewhat confusing output and is unable to pinpoint the PIDs as well.
Can anyone shed any light on this?
How can I substitute lsof -i (to see the process name as well!)
Why is the output of netstat crippled as well?
NB: The container runs with "ExecDriver": "native-0.1", that is Docker's own execution layer, not LXC.
[1] root@ec016481cf5f:/# fuser -a4n tcp 22
Cannot stat file /proc/1/fd/0: Permission denied
Cannot stat file /proc/1/fd/1: Permission denied
Cannot stat file /proc/1/fd/2: Permission denied
Cannot stat file /proc/1/fd/3: Permission denied
Cannot stat file /proc/1/fd/255: Permission denied
Cannot stat file /proc/6377/fd/0: Permission denied
Cannot stat file /proc/6377/fd/1: Permission denied
Cannot stat file /proc/6377/fd/2: Permission denied
Cannot stat file /proc/6377/fd/3: Permission denied
Cannot stat file /proc/6377/fd/4: Permission denied
22/tcp:
(I am not obsessed by the Permission denied, because that figures. What confuses me is the empty list of PIDs after 22/tcp.)
# lsof|awk '$1 ~ /^sshd/ && $3 ~ /root/ {print}'
sshd 6377 root cwd unknown /proc/6377/cwd (readlink: Permission denied)
sshd 6377 root rtd unknown /proc/6377/root (readlink: Permission denied)
sshd 6377 root txt unknown /proc/6377/exe (readlink: Permission denied)
sshd 6377 root 0 unknown /proc/6377/fd/0 (readlink: Permission denied)
sshd 6377 root 1 unknown /proc/6377/fd/1 (readlink: Permission denied)
sshd 6377 root 2 unknown /proc/6377/fd/2 (readlink: Permission denied)
sshd 6377 root 3 unknown /proc/6377/fd/3 (readlink: Permission denied)
sshd 6377 root 4 unknown /proc/6377/fd/4 (readlink: Permission denied)
sshd 6442 root cwd unknown /proc/6442/cwd (readlink: Permission denied)
sshd 6442 root rtd unknown /proc/6442/root (readlink: Permission denied)
sshd 6442 root txt unknown /proc/6442/exe (readlink: Permission denied)
sshd 6442 root 0 unknown /proc/6442/fd/0 (readlink: Permission denied)
sshd 6442 root 1 unknown /proc/6442/fd/1 (readlink: Permission denied)
sshd 6442 root 2 unknown /proc/6442/fd/2 (readlink: Permission denied)
sshd 6442 root 3 unknown /proc/6442/fd/3 (readlink: Permission denied)
sshd 6442 root 4 unknown /proc/6442/fd/4 (readlink: Permission denied)
sshd 6442 root 5 unknown /proc/6442/fd/5 (readlink: Permission denied)
There is some more output for the connected user, which is correctly identified as well, but that's it. It's apparently impossible to discern what type a given "object" is (lsof -i limits the listing to internet sockets).
|
(NOTE: it is unclear in the question how the asker is entering the docker container. I'm assuming docker exec -it CONTAINER bash was used.)
I had this problem when using a docker image based on centos:7 with docker version 1.9.0 and to overcome this, I just ran:
docker exec --privileged -it CONTAINER bash
Note the inclusion of --privileged.
My naive understanding of the reason this is required: it seems that docker makes an effort to have the container be more "secure", as documented here.
| How can I substitute lsof inside a Docker (native, not LXC-based) |
1,446,569,414,000 |
When doing a installation to a root btrfs filesystem, many Linux distributions install to the default subvolume. If left unmodified, this layout will force any new snapshots or subvolumes to be created inside the root filesystem, which is quite confusing, as the snapshots contain themselves:
/
│─dev
│─home
│─var
│─usr
│─...
└─snapshots
└─snap1
An easier to understand default subvolume layout would be:
/
├─subvolumes
│ └─root
│ ├─dev
│ ├─home
│ ├─var
│ ├─usr
│ └─...
└─snapshots
└─snap1
How can I change the distro-default btrfs installation to use this subvolume layout without booting from a livecd?
|
While not strictly necessary, you might want to do these steps in single user ("recovery") mode to avoid accidental data loss.
We'll create the layout we want in the default subvolume:
mkdir /subvolumes
btrfs subvolume snapshot / /subvolumes/root
mkdir /snapshots
/subvolumes/root will be our new root filesystem, so don't make any changes to the old root filesystem after this step.
Edit /subvolumes/root/etc/fstab to make the system use the new root subvolume as root filesystem. For that, you'll need to modify it to include the subvol=/subvolumes/root option.
Now we need to mount our new root filesystem somewhere in order to fix grub to point to the new subvolume:
mkdir /media/temporary
mount -o subvol=/subvolumes/root /dev/sdXX /media/temporary
cd /media/temporary
mount -o bind /dev dev
mount -o bind /sys sys
mount -o bind /proc proc
mount -o bind /boot boot # only necessary if you have a separate boot partition
chroot .
update-grub
exit
That's it. Reboot, and your root filesystem should be the new subvolume. If this succeeded, there shouldn't be any /snapshots directory.
If you want, you may make a permanent mount point for the default subvolume:
mkdir /media/btrfs/root
then you can mount -o subvolid=0 /dev/sdXX /media/btrfs/root to mount the default subvolume.
You can now safely delete the contents of the old root filesystem in the default subvolume.
cd /media/btrfs/root
rm -rf {dev,home,var,...}
| Move a Linux installation using btrfs on the default subvolume (subvolid=0) to another subvolume |
1,446,569,414,000 |
Using Fedora for a small Samba and development server.
|
They're kernel threads.
[jbd2/%s] are used by JBD2 (the journal manager for ext4) to periodically flush journal commits and other changes to disk.
[kdmflush] is used by Device Mapper to process deferred work that it has queued up from other contexts where doing so immediately would be problematic.
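If you want to know which device-mapper (e.g. LVM) volume dm-3 actually is, the mapping can be read back from sysfs; a sketch (it prints nothing on systems without device-mapper devices):

```shell
# Map each dm-N block device back to its device-mapper name
for f in /sys/block/dm-*/dm/name; do
  [ -e "$f" ] || continue                 # glob didn't match anything
  d=${f%/dm/name}
  printf '%s -> %s\n' "${d##*/}" "$(cat "$f")"
done
scanned=yes
```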
| What is [jbd2/dm-3-8] and [kdmflush]? And why are they constantly on iotop? |
1,446,569,414,000 |
Can programs installed under /opt be safely symlinked into /usr/local/bin, which is already in the PATH by default in Ubuntu and other Linux distros?
Alternatively, is there some reason to create a separate /opt/bin and add that to the PATH, as in this answer: Difference between /opt/bin and /opt/X/bin directories?
|
There is a difference between /opt and /usr/local/bin. So just symlinking binaries from one to another would be confusing. I would not mix them up.
/opt is for the installation of add-on application software packages, whereas the /usr/local directory is for the system administrator when installing software locally (with make and make install). /usr/local/bin is intended for binaries from software installed under /usr/local.
According to the File Hierarchy Standard, the correct way would be to add /opt/<package>/bin to the $PATH for each individual package. If this is too painful (for example when you have a large number of /opt/<package>/bin directories) then you (the local administrator) can create symlinks from /opt/<package>/bin to the /opt/bin directory. This can then be added to the users' $PATH once.
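A minimal sketch of that symlink approach (the package and tool names are made up, and a scratch directory stands in for the real /opt):

```shell
# Simulate the /opt layout in a scratch directory
root=$(mktemp -d)
mkdir -p "$root/opt/somepkg/bin" "$root/opt/bin"
printf '#!/bin/sh\necho hello from somepkg\n' > "$root/opt/somepkg/bin/sometool"
chmod +x "$root/opt/somepkg/bin/sometool"
# Symlink the package's binary into the shared /opt/bin
ln -s "$root/opt/somepkg/bin/sometool" "$root/opt/bin/sometool"
# Add the single shared directory to $PATH once
PATH="$root/opt/bin:$PATH"
sometool    # prints: hello from somepkg
```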
| How should executables installed under /opt be added to the path? |
1,446,569,414,000 |
Is there a way to take a disk img file that is broken up into parts and mount it as a single loop device?
|
I don't think you can do it in place but if you have enough space this should work:
# Create the files that will hold your data
dd if=/dev/zero of=part-00 bs=1M count=4k
dd if=/dev/zero of=part-01 bs=1M count=4k
# Create the loop devices
losetup /dev/loop0 part-00
losetup /dev/loop1 part-01
# Create a RAID array
mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/loop0 /dev/loop1
# Copy the original filesystem
dd if=original-file-00 of=/dev/md0 bs=512
# Look at the records written value
dd if=original-file-01 of=/dev/md0 bs=512 seek=<sum of records written values so far>
# Mount the new filesystem
mount /dev/md0 /mnt
You can't simply create a RAID array from the original files because the RAID disks have a specific header where the number of disks, RAID level, etc is stored. If you do it that part of your original files will be overwritten.
You can use mdadm --build to create an array without metadata, but then you really should make a backup first. Or, if a read-only mount is enough:
losetup -r /dev/loop0 original-00
losetup -r /dev/loop1 original-01
mdadm --build /dev/md0 --level=linear --raid-devices=2 /dev/loop0 /dev/loop1
mount /dev/md0 /mnt
Why do you want to do this? If your filesystem can't handle >4GB files you should just switch to a sane one.
| Mounting multiple img files as single loop device |
1,446,569,414,000 |
My Arch machine sometimes hangs, suddenly not responding in any way to the mouse or the keyboard. The cursor is frozen. Ctrl-Alt-Backsp won't stop X11, and ctrl-alt-del does exactly nothing. The cpu, network, and disk activity plots in conky and icewm stop updating. In a few minutes the fan turns on. The only way to make the computer do anything at all is to turn off power.
When it boots up, the CPU temperature monitors show 70 to 80C. Before the hang, I was usually doing low-intensity activity like web surfing getting around 50C.
The logs show nothing special compared to a normal shutdown. Memory checker runs fine with zero defects.
How can I investigate why it hung up? Is there extra information I can find for a clue? Is there anything less drastic than power-off to get some kind of action, if only some limited shell or just beeps, but might give a clue?
The machine is a Gateway P6860 17" laptop (bulky but powerful) and it's running Arch 64bit, up to date (as of March 2011). I had Arch for a long time w/o this problem, switched to Ubuntu for about a week then retreated back to a fresh install of Arch. That's when the hangings started.
UPDATE: Yeah, for sure it's overheating. At one temperature, the mouse and keyboard stop working, sometimes becoming functional after several minutes of cooling off. At a higher temperature, worse things happen, like total nonresponsiveness including ignoring SysRq. This condition is shortly followed by a sudden power-off. I have solved the problem by buying a new computer 8D
|
Frederik's answer involving magic SysRq and kernel dumps will work if the kernel is still running, and not truly hung. The kernel might just be busy-looping for some reason.
The fact that it doesn't respond to Ctrl-Alt-Del tells me that probably isn't the case, and that the machine is locking up hard. That means hardware failure, or something closely related, like a bad driver.
Your memory check test is good, if you let it run long enough. You should also try other things to try and stress the system, like StressLinux. Long-running benchmarks are good, too.
Another thing to try is booting the system with an Ubuntu live CD and trying to use the system as normal. If returning to Ubuntu temporarily like that doesn't cause the problem to recur, there's a good chance it's not actually broken hardware, but one of the related things like a bad driver or incorrectly configured kernel. It is quite possible that a more popular distribution like Ubuntu could have a more stable kernel configuration than one like Arch, simply due to the greater number of machines it's been tried on during the distro's test phase.
| How to investigate cause of total hang? |
1,446,569,414,000 |
What is the difference between a "Non-preemptive", "Preemptive" and "Selective Preemptive" Kernel?
Hope someone can shed some light into this.
|
On a preemptive kernel, a process running in kernel mode can be replaced by another process while in the middle of a kernel function.
This only applies to processes running in kernel mode; a CPU executing processes in user mode is considered "idle". If a user-mode process wants to request a service from the kernel, it has to issue an exception (e.g. a system call) which the kernel can handle.
As an example:
Process A executes an exception handler, process B is awakened by an IRQ, and the kernel replaces process A with B (a forced process switch). Process A is left unfinished. The scheduler decides afterwards whether process A gets CPU time or not.
On a non-preemptive kernel, process A would simply have used all the processor time until it finished or voluntarily decided to allow other processes to interrupt it (a planned process switch).
Today's Linux based operating systems generally do not include a fully preemptive kernel, there are still critical functions which have to run without interruption. So I think you could call this a "selective preemptive kernel".
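These models correspond to real kernel build options: PREEMPT_NONE gives a non-preemptive kernel, PREEMPT_VOLUNTARY adds explicit preemption points (the "selective" middle ground), and PREEMPT makes (almost) the whole kernel preemptible. An illustrative .config excerpt (exact values vary per distribution and build):

```
# Illustrative kernel .config excerpt; actual values depend on the build
# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
```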
Apart from that, there are approaches to make the Linux kernel (nearly) fully preemptive.
Real Time Linux Wiki
LWN article
| What is the difference between Non-preemptive, Preemptive and Selective Preemptive Kernel? |
1,446,569,414,000 |
I've noticed that Linux distributions typically have a /dev/disk/by-label directory, but this isn't always the case (For example, the CirrOS Linux test image doesn't have one).
What's required on a Linux system for the /dev/disk/by-label directory to be properly populated?
|
On most modern Linux systems, pretty much everything under /dev is put there by udev.
On my Debian machine, /dev/disk/by-label comes from several files under /lib/udev/rules.d. For example, here is a rule from 60-persistent-storage.rules:
ENV{ID_FS_LABEL_ENC}=="?*", ENV{ID_FS_USAGE}=="filesystem|other", \
SYMLINK+="disk/by-label/$env{ID_FS_LABEL_ENC}"
A few lines earlier is where ID_FS_LABEL_ENC comes from:
# probe filesystem metadata of disks
KERNEL!="sr*", IMPORT{program}="/sbin/blkid -o udev -p $tempnode"
You can run blkid yourself to see the data it's passing to udev:
root@Zia:~# /sbin/blkid -o udev -p /dev/sda2
ID_FS_SEC_TYPE=msdos
ID_FS_LABEL=xfer1
ID_FS_LABEL_ENC=xfer1
ID_FS_UUID=B140-C934
ID_FS_UUID_ENC=B140-C934
ID_FS_VERSION=FAT16
ID_FS_TYPE=vfat
ID_FS_USAGE=filesystem
ID_PART_ENTRY_SCHEME=dos
ID_PART_ENTRY_TYPE=0xc
ID_PART_ENTRY_NUMBER=2
ID_PART_ENTRY_OFFSET=257040
ID_PART_ENTRY_SIZE=257040
ID_PART_ENTRY_DISK=8:0
And indeed:
root@Zia:~# ls -l /dev/disk/by-label/xfer1
lrwxrwxrwx 1 root root 10 Nov 19 10:02 /dev/disk/by-label/xfer1 -> ../../sda2
You can put additional rules files in /etc/udev/rules.d/ if you'd like to make additional names for devices, change permissions, etc. E.g., here we have one that populates and sets the permissions on a /dev/disk/for-asm.
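For instance, a hypothetical custom rule (the file name and label are made up) could add an extra symlink for a filesystem with a particular label:

```
# /etc/udev/rules.d/99-backup-disk.rules (hypothetical)
ENV{ID_FS_LABEL_ENC}=="backup", SYMLINK+="disk/for-backup"
```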
| What causes /dev/disk/by-label to be populated? |
1,446,569,414,000 |
I recently removed dbus from my system (along with consolekit and polkit). I didn't notice any change (I was running it as system daemon and per-user from .xinitrc). However, many people claim that one just need dbus, most of linux applications are using it etc etc.
My question is, why do I need it? I don't think I understand what does it exactly do. I know that is a "message bus system", that processes communicate through it etc. And? I still don't know what do I gain from using it. Could someone explain it to me, preferably with examples "from real life"?
|
As an end-user, you don't. There is nothing that D-Bus does that couldn't be done a different way.
The benefits of D-Bus are primarily of interest to developers. It unifies several tricky bits of functionality (object-oriented and type-safe messaging, daemon activation, event notification, transport independence) under a single facility that works the same regardless of what programming language or windowing toolkit is in use.
| Why do I need dbus? |
1,446,569,414,000 |
I'm seeing very high RX dropped packets in the output of ifconfig: Thousands of packets per second, an order of magnitude more than regular RX packets.
wlan0 Link encap:Ethernet HWaddr 74:da:38:3a:f4:bb
inet addr:192.168.99.147 Bcast:192.168.99.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:31741 errors:0 dropped:646737 overruns:0 frame:0
TX packets:18424 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:90393262 (86.2 MiB) TX bytes:2348219 (2.2 MiB)
I'm testing WiFi dongles. Both have this problem, and the one with the higher drop rate actually performs better in ping floods. The one with low dropped packets suffers from extreme Ping RTTs, while the other never skips a beat.
What does Linux consider a dropped packet?
Why am I seeing so many of them?
Why doesn't it seem to affect performance?
There are lots of questions around with answers that say a dropped packet could be one of the following but that doesn't help me very much, because those possibilities don't seem to make sense in this scenario.
|
Dropped packets seen in ifconfig can have many causes; you should dig deeper into the NIC statistics to find the real reason. Below are some general causes:
NIC ring buffers getting full and unable to cope with incoming bursts of traffic
the CPU receiving NIC interrupts being too busy to process them
cable/hardware/duplex issues
a bug in the NIC driver
Look at the output of
ethtool -S wlan0
iwconfig wlan0
and the content of /proc/net/wireless for any further information.
| What exactly is an ifconfig dropped RX packet? |
1,446,569,414,000 |
I have an embedded Linux ARM system that is showing significantly less throughput than expected on both Ethernet and USB. I suspect the memory may be contributing. Is there a way to observe the memory bandwidth that is consumed while running a throughput test on the Ethernet or USB?
|
There is an open-source memory bandwidth benchmark available. It works for Intel and ARM under Linux or Windows Mobile CE.
It will give you raw performance figures for your memory as well as system performance with memory. But it won't give you real-time bandwidth, so I don't know if it's a good answer to your question.
There's also a memtop tool out there, but it's more about usage than bandwidth. The perf tool can be handy for detecting page faults.
| How can I observe memory bandwidth? |
1,446,569,414,000 |
I am compiling my own 3.14 kernel. I fear I may have left out some important networking feature to get DNS working.
I can't resolve domain names. I can ping my DNS server.
I can resolve using that DNS on other machines so I know it's not the server.
~ # cat /etc/resolv.conf
nameserver 192.168.13.5
~ # nslookup google.com
Server: 192.168.13.5
Address 1: 192.168.13.5
nslookup: can't resolve 'google.com'
~ # ping -c 1 google.com
ping: bad address 'google.com'
~ # ping -c 1 192.168.13.5
PING 192.168.13.5 (192.168.13.5): 56 data bytes
64 bytes from 192.168.13.5: seq=0 ttl=128 time=0.382 ms
--- 192.168.13.5 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.382/0.382/0.382 ms
Any ideas what I left out? here is my config: http://pastebin.com/vt4vGTgJ
EDIT:
If it's not the kernel, what could I be missing? I am using busybox, statically linked. there are no shared libraries in this system.
|
The problem is with busybox. I switched to a precompiled version and did not have issues. I need to look into compilation options with it. Thanks for your help.
https://gist.github.com/vsergeev/2391575:
There are known issues with DNS functionality in statically-linked glibc programs (like busybox in this case), because libnss must be dynamically loaded. Building a uClibc toolchain and linking busybox against that would resolve this.
| Busybox ping IP works, but hostname nslookup fails with "bad address" |
1,446,569,414,000 |
I have a bash script that makes a cURL request and writes the output to a file called resp.txt. I need to create an exclusive lock so that I can write to that file and not worry about multiple users running the script and editing the text file at the same time.
Here is the code that I expect to lock the file, perform the request, and write to the text file:
(
flock -e 200
curl 'someurl' -H 'someHeader' > resp.txt
) 200>/home/user/ITS/resp.txt
Is this the correct way to go about this? My actual script is a bit longer than this, but it seems to break when I add the flock syntax to the bash script.
If someone could explain how these file descriptors work and let me know if I am locking the file correctly that would be awesome!
|
This is not correct because when you do ( flock -e 200; ... ) 200> file, you are truncating the file file before you get the exclusive lock. I think that you should do:
touch resp.txt
(
flock -e 200
curl 'someurl' -H 'someHeader' > resp.txt
) 200< resp.txt
to place the lock on the file opened as read only.
Note: some shells do not support file descriptors larger than 9. Moreover, the hardcoded file descriptor may already be in use. With advanced shells (bash, ksh93, zsh), the following can be done:
touch resp.txt
(
unset foo
exec {foo}< resp.txt
flock -e $foo
curl 'someurl' -H 'someHeader' > resp.txt
)
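To see the exclusion at work, a quick sketch: a background subshell holds the lock for a second, and a non-blocking flock -n attempt fails while it does (single-digit descriptors are used here to stay portable, per the note above about fds larger than 9):

```shell
touch resp.txt
(
  flock -e 9      # take the exclusive lock on fd 9
  sleep 1         # ...and hold it for a second
) 9< resp.txt &
holder=$!
sleep 0.3         # give the background subshell time to grab the lock
# -n: fail immediately instead of blocking if the lock is taken
if flock -n 8 8< resp.txt; then
  status="lock acquired"
else
  status="lock busy"
fi
echo "$status"    # prints: lock busy
wait "$holder"
```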
| How to use flock and file descriptors to lock a file and write to the locked file? |
1,446,569,414,000 |
I'm running Arch Linux on my netbook. My school has an open access point, and we must sign in to the network via a page we are redirected to when we try to open any website before we are connected.
It works on my Android smartphone. It works on Windows. It should also work on Linux, since my teacher is able to connect to it (he's running Ubuntu).
I connect to the access point with wifi-menu to generate a netctl profile. I am connected, but I am not redirected to the login page, and when I type the address (taken from my phone) it doesn't find the server... I tried disabling IPv6, but nothing changed...
|
What you describe is called a captive portal. They are typically used for authentication on Wi-Fi hotspots, but can be used to control wired network access as well.
There are several ways to implement a captive portal:
HTTP Redirection
In this case, DNS queries from unauthenticated clients are resolved as normal. However, when the browser makes an HTTP request to the resolved IP address, the request is intercepted by a firewall acting as a transparent proxy. The client's HTTP request is forwarded to a server on the local network, which issues a server-side redirect with an HTTP 302 Found status code, redirecting the client to the captive portal.
DNS Redirection
In DNS based redirection the firewall ensures that only the DNS server(s) provided by DHCP may be used by authenticated clients. The firewall could also redirect any DNS queries from unauthenticated clients to the local DNS server. This DNS server will in turn return the IP address of the captive portal as a response to all DNS lookups made by unauthenticated clients.
IP Redirection
In redirection working on the IP layer, a router performs Destination Network Address Translation (DNAT) to reroute packets originating from captive hosts to the captive portal. In cases where the captive portal software runs on the router itself, the packets are directed to an internal interface instead. Packets headed from the captive portal to the host in turn get their source address rewritten so that they appear to originate from the original destination.
When troubleshooting captive portal issues, the first step would be to identify what type of redirection is in use and at which point the redirection fails. The right tool for this job is a packet analyzer, such as Wireshark. Keep in mind though, that your school's IT policy might prohibit the use of packet sniffers on the local network as such tools could easily be used to invade the privacy of others on an unencrypted network.
You could also consult the tech support at your school. They would be aware of the captive portal configuration on the local Wi-Fi network, and especially if faculty members are using Linux they probably could help in pinpointing the source of the problem.
| How to sign into an open Wireless Network? |
1,446,569,414,000 |
I recently set up both Fedora 28 & Ubuntu 18.04 systems and would like to configure my primary user account on both so that I can run sudo commands without being prompted for a password.
How can I do this on the respective systems?
|
This is pretty trivial if you make use of the special Unix group called wheel on Fedora systems. You merely have to do the following:
Add your primary user to the wheel group
$ sudo gpasswd -a <primary account> wheel
Enable NOPASSWD for the %wheel group in /etc/sudoers
$ sudo visudo
Then comment out this line:
## Allows people in group wheel to run all commands
# %wheel ALL=(ALL) ALL
And uncomment this line:
## Same thing without a password
%wheel ALL=(ALL) NOPASSWD: ALL
Save this file with Shift+Z+Z.
Logout and log back in
NOTE: This last step is mandatory so that your desktop and any corresponding top-level shells are re-execed and your primary account's new membership in the wheel Unix group takes effect.
| Setting up passwordless sudo on Linux distributions |
1,446,569,414,000 |
Out of the two options to change permissions:
chmod 777 file.txt
chmod a+rwx file.txt
I am writing a document that details that users need to change the file permissions of a certain file. I want to detail it as the most common way of changing file permissions.
Currently is says:
- Set permissions on file.txt as per the example below:
- chmod 777 /tmp/file.txt
This is just an example, and won't change files to have full permissions for everyone.
|
Google gives:
1,030,000 results for 'chmod 777'
371,000 results for 'chmod a+rwx'
chmod 777 is about 3 times more popular.
That said, I prefer using long options in documentation and scripts, because they are self-documenting. If you are following up your instructions with "Run ls -l | grep file.txt and verify permissions", you may want to use chmod a+rwx because that's how ls will display the permissions.
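To confirm the two forms are equivalent, a quick sketch in a scratch directory (file names are made up):

```shell
# Scratch directory so we don't touch real files
cd "$(mktemp -d)"
touch a.txt b.txt
chmod 777 a.txt      # octal form
chmod a+rwx b.txt    # symbolic form; explicit "a" is not masked by umask
stat -c '%a' a.txt   # prints: 777
stat -c '%a' b.txt   # prints: 777
```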
| Which is more widely used: chmod 777 or chmod a+rwx [closed] |
1,446,569,414,000 |
Every now and then I need to reinstall a Linux system (mainly Ubuntu-based distros). The process of reinstalling every piece of software I need is really boring and time-consuming. Is there any software that can help me out? For Windows there is Ninite; is there something similar for Linux?
Edit: Thanks for all the answers! I went with Ansible and it is an amazing tool.
|
Ansible is an open-source software provisioning, configuration management, and application-deployment tool. It runs on many Unix-like systems, and can configure both Unix-like systems as well as Microsoft Windows. It includes its own declarative language to describe system configuration
(From Wikipedia.) Homepage (Github).
There are several others in the same category. Reading about Ansible should give you the vocabulary to search for the others and compare, if needed. Nix is a newer contender; some say "more complex, but maybe just right". chef is also on the scene.
Ansible example for hostname myhost, module apt (replace with yum or whatever):
ansible -K -i myhost, -m apt -a "name=tcpdump,tmux state=present" --become myhost
The list "tcpdump,tmux" can be extended with commas. (The hostname myhost appears twice on the command line because we are not using a fixed host inventory list, but an ad-hoc one, indicated by the trailing comma.)
This only scratches the surface, Ansible has an extensive module collection.
| Is there any software that can help me reinstall software after fresh install |
1,446,569,414,000 |
Can a Linux command have capital letter(s)? I know it's supported, but I want to be sure whether it's a "problem" or considered "not a good thing".
|
There's no restriction on command names on Unix. Any file can be a command. And a filename can be any sequence of one or more characters (up to a length limit) other than ASCII NUL or ASCII /. zsh even lifts that limitation for functions, where you can have any string as the function name.
A few notes though:
you'll have a hard time creating a command file called . or .. ;-).
avoid names that are already taken by standard commands or shell builtins or keywords (at least of the most common shells like bash, zsh, tcsh or ksh). In that regard upper case characters can help as they are generally not used by standard commands.
It's better to restrict to ASCII characters. Non ASCII characters are not expressed the same in the various character sets that are out there
while you're at it, restrict yourself to letters, digits, dash, dot and underscore. Anything else, while legal, may cause one problem or another with this or that tool (for instance, |, =, & and many others would need to be escaped in shells, if you use :, your command cannot be used as one's login shell...). You may even want to exclude . and - which are not allowed in function names in many shells, in case you want to allow users to wrap your command in a shell function.
Make the first character a letter. Again, not a strict requirement. But underscore is sometimes used for special things (like in zsh the functions from the completion systems start with _), and all-digit commands can be a problem in things like cmd>output.log. Files whose name starts with a dot will be hidden by things like ls or shell globbings and many file managers.
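As a quick sketch, a command with capital letters works like any other (the tool name is made up):

```shell
# Scratch directory; MyTool is a hypothetical command name
cd "$(mktemp -d)"
printf '#!/bin/sh\necho Hello\n' > MyTool
chmod +x MyTool
./MyTool    # prints: Hello
```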
| Can a Linux command have capital letter(s)? |
1,446,569,414,000 |
I am using tune2fs, but it gives data in blocks, and I can't get the exact value of total size of the partition.
I have also used fdisk -l /dev/mmcblk0p1, but the size I am getting from there is also a different value.
How can I find the exact partition size?
|
The command is:
blockdev --getsize64 /dev/mmcblk0p1
It gives the result in bytes, as a 64-bit integer. It queries the byte size of a block device, as the kernel sees it.
The reason why fdisk -l /dev/mmcblk0p1 didn't work is that fdisk does something totally different: it reads the partition table (= the first sector) of the block device and prints what it finds. It doesn't check anything; it only says what is in the partition table.
It doesn't even care whether the partition table is damaged or the block device doesn't have one: it will print a warning that the checksum is not okay, but it still prints what it finds, even if the values are clearly nonsense.
This is what happened in your case: /dev/mmcblk0p1 does not have a partition table. As the device name shows, it is already the first partition of the physical disk /dev/mmcblk0. That disk does contain a partition table; had you queried it with fdisk -l /dev/mmcblk0, it would have worked (assuming it has an msdos partition table).
| How to find the size of an unmounted partition on Linux? |
1,446,569,414,000 |
I know how to list available packages from the repos and so on, but how can I find a list that matches up equivalent meta-packages, such as build-essential?
Is there such a thing, and if not, what would be a sensible approach to find close/similar matches?
|
The equivalent command is
yum groupinstall 'Development Tools'
| What would be the RHEL package corresponding to build-essential in Ubuntu? |
1,446,569,414,000 |
I was messing around with PowerShell this week and discovered that you are required to Sign your scripts so that they can be run. Is there any similar secure functionality in Linux that relates to preventing bash scripts from being run?
The only functionality similar to this, that I'm aware of is that of SSH requiring a certain key.
|
If you're locking down users' ability to run scripts via sudo, then you could use the digest functionality.
You can specify the hash of a script/executable in sudoers which will be verified by sudo before being executed. So although not the same as signing, it gives you a basic guarantee that the script has at least not been modified without sudoers also being modified.
If a command name is prefixed with a Digest_Spec, the command will only match successfully if it can be verified using the specified SHA-2 digest. This may be useful in situations where the user invoking sudo has write access to the command or its parent directory. The following digest formats are supported: sha224, sha256, sha384 and sha512. The string may be specified in either hex or base64 format (base64 is more compact). There are several utilities capable of generating SHA-2 digests in hex format such as openssl, shasum, sha224sum, sha256sum, sha384sum, sha512sum.
http://www.sudo.ws/man/1.8.13/sudoers.man.html
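As a sketch (the user name, path, and script content are all made up), a matching sudoers entry can be generated from the hex output of sha256sum:

```shell
# Scratch directory with a hypothetical script to restrict
cd "$(mktemp -d)"
printf '#!/bin/sh\necho deploying\n' > deploy.sh
# sudo's Digest_Spec accepts hex digests as produced by sha256sum
digest=$(sha256sum deploy.sh | awk '{print $1}')
sudoline="alice ALL = (root) sha256:$digest $PWD/deploy.sh"
echo "$sudoline"
```

If the script's content changes, its digest no longer matches and sudo refuses to run it until the sudoers entry is updated.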
| Is there a shell that checks to make sure the code is signed? |
1,446,569,414,000 |
I accidentally overwrote my /dev/sda partition table with GParted (full story on AskUbuntu). Since I haven't rebooted yet and my filesystem is still perfectly usable, I was told I might be able to recover the partition table from in-kernel memory. Is that possible? If so, how do I recover it and restore it?
|
Yes, you can do this with the /sys filesystem.
/sys is a fake filesystem dynamically generated by the kernel & kernel drivers.
In this specific case you can go to /sys/block/sda and you will see a directory for each partition on the drive. There are two specific files in each of those directories that you need: start and size (both expressed in 512-byte sectors). start contains the partition's offset from the beginning of the drive, and size is the size of the partition. Just delete the partitions and recreate them with exactly the same starts and sizes as found in /sys.
For example this is what my drive looks like:
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 133119 65536 83 Linux
/dev/sda2 * 133120 134340607 67103744 7 HPFS/NTFS/exFAT
/dev/sda3 134340608 974675967 420167680 8e Linux LVM
/dev/sda4 974675968 976773167 1048600 82 Linux swap / Solaris
And this is what I have in /sys/block/sda:
sda1/
start: 2048
size: 131072
sda2/
start: 133120
size: 134207488
sda3/
start: 134340608
size: 840335360
sda4/
start: 974675968
size: 2097200
I have tested this to verify information is accurate after modifying the partition table on a running system
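The same values can be collected for all partitions in one go; a sketch (output depends on the machine, and it prints nothing if the kernel sees no partitioned disks):

```shell
# Print start/size (in 512-byte sectors) for every partition in sysfs
for s in /sys/block/*/*/start; do
  [ -e "$s" ] || continue        # glob didn't match anything
  p=${s%/start}
  printf '%s: start=%s size=%s\n' "${p##*/}" "$(cat "$s")" "$(cat "$p/size")"
done
scanned=yes
```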
| How to read the in-memory (kernel) partition table of /dev/sda? |
1,446,569,414,000 |
I have two NICs on the server side, eth0 → 192.168.8.140 and eth1 → 192.168.8.142. The client sends data to 192.168.8.142, and I expect iftop to show the traffic on eth1, but it does not. All traffic goes through eth0, so how can I test the two NICs?
Why does all the traffic go through eth0 instead of eth1? I expected I could get 1 Gbit/s per interface. What's wrong with my setup or configuration?
Server
ifconfig
eth0 Link encap:Ethernet HWaddr 00:00:00:19:26:B0
inet addr:192.168.8.140 Bcast:0.0.0.0 Mask:255.255.252.0
inet6 addr: 0000::0000:0000:fe19:26b0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:45287446 errors:0 dropped:123343 overruns:2989 frame:0
TX packets:3907747 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:66881007720 (62.2 GiB) TX bytes:261053436 (248.9 MiB)
Memory:f7e00000-f7efffff
eth1 Link encap:Ethernet HWaddr 00:00:00:19:26:B1
inet addr:192.168.8.142 Bcast:0.0.0.0 Mask:255.255.255.255
inet6 addr: 0000::0000:0000:fe19:26b1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:19358 errors:0 dropped:511 overruns:0 frame:0
TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1772275 (1.6 MiB) TX bytes:1068 (1.0 KiB)
Memory:f7c00000-f7cfffff
Server side
# Listen for incoming from 192.168.8.142
nc -v -v -n -k -l 192.168.8.142 8000 | pv > /dev/null
Listening on [192.168.8.142] (family 0, port 8000)
Connection from 192.168.8.135 58785 received!
Client
# Send to 192.168.8.142
time yes | pv |nc -s 192.168.8.135 -4 -v -v -n 192.168.8.142 8000 >/dev/null
Connection to 192.168.8.142 8000 port [tcp/*] succeeded!
Server side
$ iftop -i eth0
interface: eth0
IP address is: 192.168.8.140
TX: cumm: 6.34MB peak: 2.31Mb rates: 2.15Mb 2.18Mb 2.11Mb
RX: 2.55GB 955Mb 874Mb 892Mb 872Mb
TOTAL: 2.56GB 958Mb 877Mb 895Mb 874Mb
$ iftop -i eth1
interface: eth1
IP address is: 192.168.8.142
TX: cumm: 0B peak: 0b rates: 0b 0b 0b
RX: 4.51KB 3.49Kb 3.49Kb 2.93Kb 2.25Kb
TOTAL: 4.51KB 3.49Kb 3.49Kb 2.93Kb 2.25Kb
$ ip link show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 00:00:00:19:26:b0 brd ff:ff:ff:ff:ff:ff
$ ip link show eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 00:00:00:19:26:b1 brd ff:ff:ff:ff:ff:ff
|
There are two possible design models for a TCP/IP network stack: a strong host model and a weak host model. You're expecting behavior that would match the strong host model. Linux is designed to use the weak host model. In general the weak host model is more common as it reduces the complexity of the routing code and thus might offer better performance. Otherwise the two host models are just different design principles: neither is inherently better than the other.
Basically, the weak host model means that outgoing traffic will be sent out the first interface listed in the routing table that matches the IP address of the destination (or selected gateway, if the destination is not reachable directly), without regard to the source IP address.
This is basically why it's generally inadvisable to use two separate physical interfaces if you need two IP addresses on the same network segment. Instead assign two IP addresses for one interface (IP aliases: e.g. eth1 = 192.168.8.142 and eth1:0 = 192.168.8.140). If you need more bandwidth than a single interface can provide, bond (or team, if applicable) two or more interfaces together, and then run both IPs on the bond/team.
By tweaking a number of sysctl settings and using the "advanced routing" functionality to set up independent routing tables for each NIC, it is possible to make Linux behave like a strong-host-model system. But that is a very special configuration, and I would recommend thinking twice before implementing it.
See the answers at Linux Source Routing, Strong End System Model / Strong Host Model? if you really need it.
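For reference, the advanced-routing setup mentioned above boils down to giving each NIC its own routing table and selecting the table by source address. A minimal sketch with the addresses from the question (table numbers are arbitrary choices, the /22 mirrors eth0's netmask, and the extra sysctl tuning is omitted; run as root):

```shell
# one routing table per NIC, selected by source IP (iproute2, run as root)
ip route add 192.168.8.0/22 dev eth0 src 192.168.8.140 table 140
ip route add 192.168.8.0/22 dev eth1 src 192.168.8.142 table 142
ip rule add from 192.168.8.140 table 140
ip rule add from 192.168.8.142 table 142
```

Replies to traffic addressed to 192.168.8.142 then leave via eth1 because the source-address rule overrides the default table lookup.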
| Why does Linux network traffic only go through eth0? |
1,446,569,414,000 |
If I try wget on a webpage, I get the page as HTML. Is it possible to retrieve only the text of a page, without the associated HTML? (I need this because some HTML pages contain a C program, which gets downloaded with the HTML tags around it. I have to open it in a browser and manually copy the text to make a .c file.)
|
wget will only retrieve the document. If the document is in HTML, what you want is the result of parsing the document.
You could, for example, use lynx -dump -nolist, if you have lynx around.
lynx is a lightweight, simple web browser, which has the -dump feature, used to output the result of the parsing process. -nolist avoids the list of links at the end, which will appear if the page has any hyperlinks.
As mentioned by @Thor, elinks can be used for this too, as it also has a -dump option (and has -no-references to omit the list of links). It may be especially useful if you walk across some site using -sigh- frames (MTFBWY).
Also, keep in mind that, unless the page is really just C code with HTML tags, you will need to check the result, just to make sure there's nothing more than C code there.
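For example, to go straight from URL to plain text in one step (assuming lynx or elinks is installed; the URL here is a placeholder):

```shell
# render the page and keep only the text, no trailing link list
lynx -dump -nolist 'http://example.com/program.html' > program.c
# elinks equivalent
elinks -dump -no-references 'http://example.com/program.html' > program.c
```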
| How to get text of a page using wget without html? |
1,446,569,414,000 |
How to detect if isolcpus is activated and on which cpus, when for example you connect for the first time on a server.
Conditions:
not spawning any process to see where it will be migrated.
The use case is that isolcpus=1-7 on a 6-core i7 seems not to activate isolcpus at boot, and I would like to know if it's possible, from /proc, /sys or any kernel internals readable from userspace, to get a clear status of whether isolcpus is active and which CPUs are concerned.
Or even to read the active setting of the scheduler, which is the first thing concerned by isolcpus.
Consider the uptime is so big, that dmesg is no more displaying boot log to detect any error at startup.
Basic answer like "look at kernel cmd line" will not be accepted :)
|
What you look for should be found inside this virtual file:
/sys/devices/system/cpu/isolated
and the reverse in
/sys/devices/system/cpu/present // Thanks to John Zwinck
From drivers/base/cpu.c we see that the source displayed is the kernel variable cpu_isolated_map:
static ssize_t print_cpus_isolated(struct device *dev,
n = scnprintf(buf, len, "%*pbl\n", cpumask_pr_args(cpu_isolated_map));
...
static DEVICE_ATTR(isolated, 0444, print_cpus_isolated, NULL);
and cpu_isolated_map is exactly what gets set by kernel/sched/core.c at boot:
/* Setup the mask of cpus configured for isolated domains */
static int __init isolated_cpu_setup(char *str)
{
int ret;
alloc_bootmem_cpumask_var(&cpu_isolated_map);
ret = cpulist_parse(str, cpu_isolated_map);
if (ret) {
pr_err("sched: Error, all isolcpus= values must be between 0 and %d\n", nr_cpu_ids);
return 0;
}
return 1;
}
But as you observed, someone could have modified the affinity of processes, including daemon-spawned ones, cron, systemd and so on. If that happens, new processes will be spawned inheriting the modified affinity mask, not the one set by isolcpus.
So the above will give you isolcpus as you requested, but that might still not be helpful.
Supposing that you find out that isolcpus has been issued but has not "taken", this unwanted behaviour could be caused by some process realizing that it is bound to only CPU 0, believing it is in monoprocessor mode by mistake, and helpfully attempting to "set things right" by resetting the affinity mask. If that is the case, you might try to isolate CPUs 0-5 instead of 1-6, and see whether this happens to work.
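Putting the first part together, a quick check from a shell (the isolated file only exists on reasonably recent kernels, and is empty when nothing is isolated):

```shell
# CPUs the kernel booted with (always populated on Linux)
cat /sys/devices/system/cpu/present
# CPUs excluded from the scheduler's domains by isolcpus=
cat /sys/devices/system/cpu/isolated 2>/dev/null \
    || echo "no isolated file (older kernel)"
```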
| how to detect if isolcpus is activated? |
1,446,569,414,000 |
I am learning Linux. I was surprised to see that the parameter order seems to matter when making a tarball.
tar -cfvz casual.tar.gz snapback.txt bucket.txt
gives the error:
tar: casual.tar.gz: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
But if I issue the command like this:
tar -cvzf casual.tar.gz snapback.txt bucket.txt
the tarball is created without errors
Can anyone explain to me why the parameter order matters in this example or where I can find that information to learn why myself? I tried it the way I did in my first example that received an error with the logic of putting the required parameters c and f first followed by my other parameters.
I want to completely absorb Linux, which includes understanding why things like this occur. Thanks in advance!
|
Whether the order matters depends on whether you start the options with a minus:
$ tar -cfvz casual.tar.gz snapback.txt bucket.txt
tar: casual.tar.gz: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
$ tar cfvz casual.tar.gz snapback.txt bucket.txt
snapback.txt
bucket.txt
This unusual behavior is documented in the man page
Options to GNU tar can be given in three different styles.
In traditional style
...
Any command line words that remain after all options has
been processed are treated as non-optional arguments: file or archive
member names.
...
tar cfv a.tar /etc
...
In UNIX or short-option style, each option letter is prefixed with a
single dash, as in other command line utilities. If an option takes
argument, the argument follows it, either as a separate command line
word, or immediately following the option.
...
tar -cvf a.tar /etc
...
In GNU or long-option style, each option begins with two dashes and
has a meaningful name
...
tar --create --file a.tar --verbose /etc
tar, which is short for "tape archive", has been around since before the current conventions were decided on, so it keeps the different modes for compatibility.
So to "absorb Linux", I'd suggest a few starting lessons:
always read the man page
minor differences in syntax are sometimes important
the position of items - most commands require options to be the first thing after the command name
whether a minus is required (like tar, ps works differently depending on whether there is a minus at the start)
whether a space is optional, required, or must not be there (xargs -ifoo is different from xargs -i foo)
some things don't work the way you'd expect
To get the behavior you want in the usual style, put the output file name directly after the f or -f, e.g.
$ tar -cvzf casual.tar.gz snapback.txt bucket.txt
snapback.txt
bucket.txt
or:
$ tar -c -f casual.tar.gz -z -v snapback.txt bucket.txt
or you could use the less common but easier to read GNU long style:
$ tar --create --verbose --gzip --file casual.tar.gz snapback.txt bucket.txt
| Does parameter order matter with tar? [duplicate] |
1,446,569,414,000 |
The Linux README states that:
Linux has also been ported to itself. You can now run the kernel as a
userspace application - this is called UserMode Linux (UML).
Why would someone want to do this?
|
UML is very fast for development and much easier to debug. If for example you use KVM then you need to setup an environment that boots from network or be copying new kernels in the VM. With UML you just run the new kernel.
At one point I was testing some networking code on the kernel. This means that you get very very frequent kernel panics or other issues. Debugging this with UML is very easy.
Additionally, UML runs in places where there's no hardware-assisted virtualization, so it was used even more before KVM became commonplace.
| Why would someone want to run UserMode Linux (UML) |
1,446,569,414,000 |
I have a bash script that runs as long as the Linux machine is powered on. I start it as shown below:
( /mnt/apps/start.sh 2>&1 | tee /tmp/nginx/debug_log.log ) &
After it lauches, I can see the tee command in my ps output as shown below:
$ ps | grep tee
418 root 0:02 tee /tmp/nginx/debug_log.log
3557 root 0:00 grep tee
I have a function that monitors the size of the log that tee produces and kills the tee command when the log reaches a certain size:
monitor_debug_log_size() {
## Monitor the file size of the debug log to make sure it does not get too big
while true; do
cecho r "CHECKING DEBUG LOG SIZE... "
debugLogSizeBytes=$(stat -c%s "/tmp/nginx/debug_log.log")
cecho r "DEBUG LOG SIZE: $debugLogSizeBytes"
if [ $((debugLogSizeBytes)) -gt 100000 ]; then
cecho r "DEBUG LOG HAS GROWN TO LARGE... "
sleep 3
#rm -rf /tmp/nginx/debug_log.log 1>/dev/null 2>/dev/null
kill -9 `pgrep -f tee`
fi
sleep 30
done
}
To my surprise, killing the tee command also kills my start.sh instance. Why is this? How can I end the tee command but have my start.sh continue to run? Thanks.
|
When tee terminates, the command feeding it will continue to run, until it attempts to write more output. Then it will get a SIGPIPE (13 on most systems) for trying to write to a pipe with no readers.
If you modify your script to trap SIGPIPE and take some appropriate action (like, stop writing output), then you should be able to have it continue after tee is terminated.
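You can see the mechanism in isolation: once the reader of a pipe is gone, the writer's next write raises SIGPIPE, and the shell reports exit status 128 + 13 = 141 (bash syntax):

```shell
# 'head' exits after one line; the next write by 'yes' gets SIGPIPE
yes | head -n 1 > /dev/null
echo "yes exited with ${PIPESTATUS[0]}"   # 141 = 128 + SIGPIPE(13)
```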
Better yet, rather than killing tee at all, use logrotate with the copytruncate option for simplicity.
To quote logrotate(8):
copytruncate
Truncate the original log file in place after creating a copy, instead of moving the old log file and optionally creating a new one. It can be used when some program cannot be told to close its logfile and thus might continue writing (appending) to the previous log file forever. Note that there is a very small time slice between copying the file and truncating it, so some logging data might be lost. When this option is used, the create option will have no effect, as the old log file stays in place.
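A matching logrotate snippet, typically dropped into /etc/logrotate.d/, might look like this (the path is taken from the question; the size threshold and rotation count are illustrative):

```
/tmp/nginx/debug_log.log {
    size 100k
    rotate 4
    missingok
    copytruncate
}
```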
| How to terminate Linux tee command without killing application it is receiving from |
1,446,569,414,000 |
When I first got into using Slackware years ago I quickly learned to love JFS over ext3 or reiserfs given that it was reliable and if there was an unclean shutdown, its disk checking was very very quick. It's only been recently that I've found out that JFS is obscure to the point of being almost completely unmaintained by anyone.
I had no idea I was in such a minority. Why is it that this happened? Is it that filesystem technology has since advanced to the point that JFS now lacks any comparative advantages? Is it that ext3 was more interoperable with other operating systems? Was a particular other filesystem blessed by a particular vendor or the kernel developers?
Not so much a technical question as a historical one.
|
The first thing you have to get out of the way is the comparison to ext[234]. Replacing any of them is going to be like replacing NTFS in Windows. Possible, sure, but it will require a decision from the top to switch.
I know you're asking about keeping existing alternatives, not removal of other alternatives, but that privileged competition is sucking up most of the oxygen in the room. Until you get rid of the competition, marginal alternatives are going to have an exceptionally hard time getting any attention.
Since ext[234] aren't going away, JFS and its ilk are at a serious disadvantage from the start.
(This phenomenon is called the Tyranny of the Default.)
The second thing is that both JFS and XFS were contributed to Linux at about the same time, and they pretty much solve the same problems. Kernel geeks can argue about fine points between the two, but the fact is that those who have run into one of ext[234]'s limitations had two roughly equivalent solutions in XFS and JFS.
So why did XFS win? I'm not sure, but here are some observations:
Red Hat and SuSE endorsed it.
RHEL 7 uses XFS as its default filesystem, and it was an install-time option in RHEL 6. After RHEL 6 came out, Red Hat backported official XFS support to RHEL 5. XFS was available for RHEL 5 before that through the semi-official EPEL channel.
SuSE included XFS as an install-time option much earlier than Red Hat did, going back to SLES 8, released in 2002. It is not the current default, but it has been officially supported that whole time.
There are many other Linux distros, and RHEL and SuSE are not the most popular distros across the entire Linux space, but they are the big iron distros of choice. They're playing where the advantages of JFS and XFS matter most. These companies can't always wag the dog, but in questions involving big iron, they sometimes can.
XFS is from SGI, a company that is essentially gone now. Before they died, they formally gave over any rights they had in XFS so the Linux folk felt comfortable including it in the kernel.
IBM has also given over enough rights to JFS to make the Linux kernel maintainers comfortable, but we can't forget that they're an active, multibillion dollar company with thousands of patents. If IBM ever decided that their support of Linux no longer aligned with its interests, well, it could get ugly.
Sure, someone probably owns SGI's IP rights now and could make a fuss, but it probably wouldn't turn out any worse than the SCO debacle. IBM might even weigh in and help squash such a troll, since their interests do currently include supporting Linux.
The point being, XFS just feels more "free" to a lot of folk. It's less likely to pose some future IP problem. One of the problems with our current IP system is that copyright is tied to company lifetime, and companies don't usually die. Well, SGI did. That makes people feel better about treating SGI's contribution of XFS like that of any individual's contribution.
In any system involving network effects where you have two roughly equivalent alternatives — JFS and XFS in this case — you almost never get a 50/50 market share split.
Here, the network effects are training, compatibility, feature availability... These effects push the balance further and further toward the option that gained that early victory. Witness Windows vs. OS X, Linux vs. all-other-*ix, Ethernet vs. Token Ring...
| Why is JFS so obscure? |
1,446,569,414,000 |
I need to search for something in the entire content of a directory.
I am trying:
find . | xargs grep word
I get error:
xargs: unterminated quote
How to achieve this?
|
xargs expects input in a format that no other command produces, so it's hard to use effectively. What's going wrong here is that you have a file whose name must be quoted on input to xargs (probably containing a ').
If your grep supports the -r or -R option for recursive search, use it.
grep -r word .
Otherwise, use the -exec primary of find. This is the usual way of achieving the same effect as xargs, except without constraints on file names. Reasonably recent versions of find allow you to group several files in a single call to the auxiliary command. Passing /dev/null to grep ensures that it will show the file name in front of each match, even if it happens to be called on a single file.
find . -type f -exec grep word /dev/null {} +
Older versions of find (on older systems or OpenBSD, or reduced utilities such as BusyBox) can only call the auxiliary command on one file at a time.
find . -type f -exec grep word /dev/null {} \;
Some versions of find and xargs have extensions that let them communicate correctly, using null characters to separate file names so that no quoting is required. These days, only OpenBSD has this feature without having -exec … {} +.
find . -type f -print0 | xargs -0 grep word /dev/null
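A self-contained illustration of why the plain pipeline breaks — the file name contains a single quote, which trips xargs' quote parsing:

```shell
d=$(mktemp -d)
printf 'some word here\n' > "$d/it's a file.txt"

find "$d" | xargs grep word     # fails with a quote error, as in the question
grep -r word "$d"               # works
find "$d" -type f -exec grep word /dev/null {} +    # works
```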
| How to search for a word in entire content of a directory in linux |
1,446,569,414,000 |
Delete one word backward:
Ctrl + w
Delete one word forward:
?
Can anyone answer the above or do I need to add a command to stty as I can see by running the following command:
stty -a
that the action associated with Ctrl + w is defined there.
|
The key sequence is M-d in bash, i.e. Alt+D or Esc+D.
This invokes the kill-word Readline function:
kill-word (M-d)
Kill from point to the end of the current word, or if between
words, to the end of the next word. Word boundaries are the
same as those used by forward-word.
The above is taken from the bash manual.
| How to delete one word forward in bash? [duplicate] |
1,446,569,414,000 |
This is the output of the ls -li command on the VFAT filesystem.
% ls -li
合計 736
1207 drwxr-xr-x 3 root root 16384 3月 10 10:42 efi
1208 -rwxr-xr-x 1 root root 721720 3月 22 14:15 kernel.bin
1207 and 1208 are the inode numbers of the directory and file. However, the VFAT filesystem does not have the concept of inodes.
How does Linux assign inode numbers for files on a filesystem which does not have the inode notion?
|
tl;dr: For virtual, volatile, or inode-agnostic filesystems, inode numbers are usually generated from a monotonically incrementing, 32-bit counter when the inode is created. The rest of the inode (eg. permissions) is built from the equivalent data in the underlying filesystem, or is replaced with values set at mount time (eg. {uid,gid}=) if no such concept exists.
To answer the question in the title (ie. abstractly, how Linux allocates inode numbers for a filesystem that has no inode concept), it depends on the filesystem. For some virtual or inodeless filesystems, the inode number is drawn at instantiation time from the get_next_ino pool. This has a number of problems, though:
get_next_ino() uses 32-bit inode numbers even on a 64-bit kernel, due to legacy handling for 32-bit userland without _FILE_OFFSET_BITS=64;
get_next_ino() is just a globally incrementing counter used by multiple filesystems, so the risk of overflow is increased even further.
Problems like this are one of the reasons why I moved tmpfs away from get_next_ino-backed inodes last year.
For this reason, tmpfs in particular is an exception from most volatile or "inodeless" filesystem formats. Sockets, pipes, ramfs, and the like still use the get_next_ino pool as of 5.11.
As for your specific question about FAT filesystems: fs/fat/inode.c is where inode numbers are allocated for FAT filesystems. If we look in there, we see fat_build_inode (source):
struct inode *fat_build_inode(struct super_block *sb,
struct msdos_dir_entry *de, loff_t i_pos)
{
struct inode *inode;
int err;
fat_lock_build_inode(MSDOS_SB(sb));
inode = fat_iget(sb, i_pos);
if (inode)
goto out;
inode = new_inode(sb);
if (!inode) {
inode = ERR_PTR(-ENOMEM);
goto out;
}
inode->i_ino = iunique(sb, MSDOS_ROOT_INO);
inode_set_iversion(inode, 1);
err = fat_fill_inode(inode, de);
if (err) {
iput(inode);
inode = ERR_PTR(err);
goto out;
}
fat_attach(inode, i_pos);
insert_inode_hash(inode);
out:
fat_unlock_build_inode(MSDOS_SB(sb));
return inode;
}
What this basically says is this:
Take the FAT inode creation lock for this superblock.
Check if the inode already exists at this position in the superblock. If so, unlock and return that inode.
Otherwise, create a new inode.
Get the inode number from iunique(sb, MSDOS_ROOT_INO) (more about that in a second).
Fill the rest of the inode from the equivalent FAT datastructures.
inode->i_ino = iunique(sb, MSDOS_ROOT_INO); is where the inode number is set here. iunique (source) is a fs-agnostic function that provides unique inode numbers for a given superblock. It does this by using a superblock + inode-based hash table, with a monotonically increasing counter:
ino_t iunique(struct super_block *sb, ino_t max_reserved)
{
static DEFINE_SPINLOCK(iunique_lock);
static unsigned int counter;
ino_t res;
rcu_read_lock();
spin_lock(&iunique_lock);
do {
if (counter <= max_reserved)
counter = max_reserved + 1;
res = counter++;
} while (!test_inode_iunique(sb, res)); /* nb: this checks the hash table */
spin_unlock(&iunique_lock);
rcu_read_unlock();
return res;
}
In that respect, it's pretty similar to the previously mentioned get_next_ino: just per-superblock instead of being global (like for pipes, sockets, or the like), and with some rudimentary hash-table based protection against collisions. It even inherits get_next_ino's behaviour using 32-bit inode numbers as a method to try and avoid EOVERFLOW on legacy applications, so there are likely going to be more filesystems which need 64-bit inode fixes (like my aforementioned inode64 implementation for tmpfs) in the future.
So to summarise:
Most virtual or inodeless filesystems use a monotonically incrementing counter for the inode number.
That counter isn't stable even for on-disk inodeless filesystems*. It may change without other changes to the filesystem on remount.
Most filesystems in this state (except for tmpfs with inode64) are still using 32-bit counters, so with heavy use it's entirely possible the counter may overflow and you may end up with duplicate inodes.
* ...although, to be fair, by contract this is true even for filesystems which do have an inode concept when i_generation changes -- it just is less likely to happen in practice since often the inode number is related to its physical position, or similar.
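The pipe case from the summary is easy to observe on a Linux box: each new pipe draws the next number from the shared counter. Here stat's own stdin is the pipe, so /proc/self/fd/0 resolves to the pipe inode (assumes /proc is mounted and GNU stat):

```shell
# each new pipe takes the next inode number from the global pool
i1=$(echo | stat -L -c %i /proc/self/fd/0)
i2=$(echo | stat -L -c %i /proc/self/fd/0)
echo "$i1 $i2"   # second number is larger (the counter is shared, so gaps are possible)
```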
| How does Linux assign inode numbers on filesystems not based on inodes? |
1,446,569,414,000 |
I am trying to download the Linux version of Gdrive from GitHub using this command
wget https://docs.google.com/uc?id=0B3X9GlR6EmbnWksyTEtCM0VfaFE&export=download
It's getting stuck with this output.
[1] 869 pi@raspberrypi:~ $ Redirecting output to ‘wget-log.2’
|
There is a & in the URL (nothing special for URLs), but it just so happens that this is a reserved character for the bash shell, which sends everything before it to the background as a job...
Try to either put your URL in "" or escape that & with a preceding \
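Concretely, either of these keeps the whole URL — including the export=download part — together as one argument:

```shell
wget "https://docs.google.com/uc?id=0B3X9GlR6EmbnWksyTEtCM0VfaFE&export=download"
# or
wget https://docs.google.com/uc?id=0B3X9GlR6EmbnWksyTEtCM0VfaFE\&export=download
```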
| redirecting output to 'wget-log.1' |
1,446,569,414,000 |
From this post it is shown that FS:[0x28] is a stack-canary. I'm generating that same code using GCC on this function,
void foo () {
char a[500] = {};
printf("%s", a);
}
Specifically, I'm getting this assembly..
0x000006b5 64488b042528. mov rax, qword fs:[0x28] ; [0x28:8]=0x1978 ; '(' ; "x\x19"
0x000006be 488945f8 mov qword [local_8h], rax
...stuff...
0x00000700 488b45f8 mov rax, qword [local_8h]
0x00000704 644833042528. xor rax, qword fs:[0x28]
0x0000070d 7405 je 0x714
0x0000070f e85cfeffff call sym.imp.__stack_chk_fail ; void __stack_chk_fail(void)
; CODE XREF from 0x0000070d (sym.foo)
0x00000714 c9 leave
0x00000715 c3 ret
What is setting the value of fs:[0x28]? The kernel, or is GCC throwing in the code? Can you show the code in the kernel, or compiled into the binary that sets fs:[0x28]? Is the canary regenerated -- on boot, or process spawn? Where is this documented?
|
It's easy to track this initialization, as for (almost) every process strace shows a very suspicious syscall during the very beginning of the process run:
arch_prctl(ARCH_SET_FS, 0x7fc189ed0740) = 0
That's what man 2 arch_prctl says:
ARCH_SET_FS
Set the 64-bit base for the FS register to addr.
Yay, looks like that's what we need. To find, who calls arch_prctl, let's look for a backtrace:
(gdb) catch syscall arch_prctl
Catchpoint 1 (syscall 'arch_prctl' [158])
(gdb) r
Starting program: <program path>
Catchpoint 1 (call to syscall arch_prctl), 0x00007ffff7dd9cad in init_tls () from /lib64/ld-linux-x86-64.so.2
(gdb) bt
#0 0x00007ffff7dd9cad in init_tls () from /lib64/ld-linux-x86-64.so.2
#1 0x00007ffff7ddd3e3 in dl_main () from /lib64/ld-linux-x86-64.so.2
#2 0x00007ffff7df04c0 in _dl_sysdep_start () from /lib64/ld-linux-x86-64.so.2
#3 0x00007ffff7dda028 in _dl_start () from /lib64/ld-linux-x86-64.so.2
#4 0x00007ffff7dd8fb8 in _start () from /lib64/ld-linux-x86-64.so.2
#5 0x0000000000000001 in ?? ()
#6 0x00007fffffffecef in ?? ()
#7 0x0000000000000000 in ?? ()
So, the FS segment base is set by the ld-linux, which is a part of glibc, during the program loading (if the program is statically linked, this code is embedded into the binary). This is where it all happens.
During the startup, the loader initializes TLS. This includes memory allocation and setting FS base value to point to the TLS beginning. This is done via arch_prctl syscall. After TLS initialization security_init function is called, which generates the value of the stack guard and writes it to the memory location, which fs:[0x28] points to:
Stack guard value initialization
Stack guard value write, more detailed
And 0x28 is the offset of the stack_guard field in the structure which is located at the TLS start.
| What sets fs:[0x28] (stack canary)? |
1,446,569,414,000 |
When I grep for a java process I get the below output, but it's limited to 4096 characters, which results in the actual process name (which is kafka.Kafka) not being shown in the grep output.
Is this a limitation of grep? Is there any way to print characters beyond the 4096 limit?
ps -ef | grep java
java -Xmx6G -Xms6G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20
-XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true -Xloggc:/x/kafka/data01/kafka-app-logs/kafkaServer-gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/x/kafka/data01/kafka-app-logs -Dlog4j.configuration=file:./../config/log4j.properties -cp :/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/aopalliance-repackaged-2.4.0-b34.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/argparse4j-0.5.0.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/connect-api-0.10.1.1.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/connect-file-0.10.1.1.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/connect-json-0.10.1.1.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/connect-runtime-0.10.1.1.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/guava-18.0.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/hk2-api-2.4.0-b34.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/hk2-locator-2.4.0-b34.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/hk2-utils-2.4.0-b34.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jackson-annotations-2.6.0.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jackson-core-2.6.3.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jackson-databind-2.6.3.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jackson-jaxrs-base-2.6.3.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jackson-jaxrs-json-provider-2.6.3.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jackson-module-jaxb-annotations-2.6.3.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/javassist-3.18.2-GA.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1
.1/bin/../libs/javax.annotation-api-1.2.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/javax.inject-1.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/javax.inject-2.4.0-b34.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/javax.servlet-api-3.1.0.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/javax.ws.rs-api-2.0.1.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jersey-client-2.22.2.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jersey-common-2.22.2.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jersey-container-servlet-2.22.2.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jersey-container-servlet-core-2.22.2.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jersey-guava-2.22.2.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jersey-media-jaxb-2.22.2.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jersey-server-2.22.2.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jetty-http-9.2.15.v20160210.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jetty-io-9.2.15.v20160210.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jetty-security-9.2.15.v20160210.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jetty-server-9.2.15.v20160210.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jetty-util-9.2.15.v20160210.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/jopt-simple-4.9.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bin/../libs/kafka_2.10-0.10.1.1.jar:/x/home/bmcuser/kafka-paypal/kafka_2.10-0.10.1.1/bi
n/../libs/kafka_2.10-0.10.1.1-so
|
This is not a limitation of grep, but of /proc/PID/cmdline (technically, a design decision, not a limitation). /proc/PID/cmdline contains the complete command line of the process, with main command and arguments separated by ASCII NUL, and the file ends in NUL too. So, grep will print the whole file content if there is a match. (ps -ef gets the content of this file as CMD).
The maximum length is hardcoded in the (Linux) kernel to the PAGE_SIZE:
static int proc_pid_cmdline(struct task_struct *task, char * buffer)
{
int res = 0;
unsigned int len;
struct mm_struct *mm = get_task_mm(task);
if (!mm)
goto out;
if (!mm->arg_end)
goto out_mm; /* Shh! No looking before we're done */
len = mm->arg_end - mm->arg_start;
if (len > PAGE_SIZE)
len = PAGE_SIZE;
hence 4096 bytes for such a system:
% getconf PAGE_SIZE
4096
Also, if you have multibyte character(s), the number of characters would be less than 4096, as you can imagine.
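You can see this structure directly — a quick sketch, inspecting the current shell's own entry (NULs are translated to spaces to make the file readable):

```shell
# /proc/PID/cmdline is NUL-separated; translate the NULs to see the arguments
tr '\0' ' ' < /proc/$$/cmdline; echo

# Compare the raw byte count against PAGE_SIZE (the cap described above)
wc -c < /proc/$$/cmdline
getconf PAGE_SIZE
```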
| ps only prints up to 4096 characters of any process's command line |
1,446,569,414,000 |
I have written a script that notifies me when a value is not within a given range. All values "out of range" are logged in a set of per day files.
Every line is timestamped in a proprietary reverse way:
yyyymmddHHMMSS
Now, I would like to refine the script, and receive notifications just when at least 60 minutes are passed since the last notification for the given out of range value.
I already solved the issue to print the logs in reverse ordered way with:
for i in $(ls -t /var/log/logfolder/*); do zcat $i|tac|grep \!\!\!|grep --color KEYFORVALUE; done
that results in:
...
20170817041001 - WARNING: KEYFORVALUE=252.36 is not between 225 and 245 (!!!)
20170817040001 - WARNING: KEYFORVALUE=254.35 is not between 225 and 245 (!!!)
20170817035001 - WARNING: KEYFORVALUE=254.55 is not between 225 and 245 (!!!)
20170817034001 - WARNING: KEYFORVALUE=254.58 is not between 225 and 245 (!!!)
20170817033001 - WARNING: KEYFORVALUE=255.32 is not between 225 and 245 (!!!)
20170817032001 - WARNING: KEYFORVALUE=254.99 is not between 225 and 245 (!!!)
20170817031001 - WARNING: KEYFORVALUE=255.95 is not between 225 and 245 (!!!)
20170817030001 - WARNING: KEYFORVALUE=255.43 is not between 225 and 245 (!!!)
20170817025001 - WARNING: KEYFORVALUE=255.26 is not between 225 and 245 (!!!)
20170817024001 - WARNING: KEYFORVALUE=255.42 is not between 225 and 245 (!!!)
20170817012001 - WARNING: KEYFORVALUE=252.04 is not between 225 and 245 (!!!)
...
Anyway, I'm stuck at calculating the number of seconds between two of those timestamps, for instance:
20170817040001
20160312000101
What should I do in order to calculate the time elapsed between two timestamps?
|
With the GNU implementation of date or compatible, this will give you the date in seconds (since the UNIX epoch)
date --date '2017-08-17 04:00:01' +%s # "1502938801"
And this will give you the date as a readable string from a number of seconds
date --date '@1502938801' # "17 Aug 2017 04:00:01"
So all that's needed is to convert your date/timestamp into a format that GNU date can understand, use maths to determine the difference, and output the result
datetime1=20170817040001
datetime2=20160312000101
# ksh93 string manipulation (also available in bash, zsh and
# recent versions of mksh)
datestamp1="${datetime1:0:4}-${datetime1:4:2}-${datetime1:6:2} ${datetime1:8:2}:${datetime1:10:2}:${datetime1:12:2}"
datestamp2="${datetime2:0:4}-${datetime2:4:2}-${datetime2:6:2} ${datetime2:8:2}:${datetime2:10:2}:${datetime2:12:2}"
# otherwise use sed
# datestamp1=$(echo "$datetime1" | sed -nE 's/(....)(..)(..)(..)(..)(..)/\1-\2-\3 \4:\5:\6/p')
# datestamp2=$(echo "$datetime2" | sed -nE 's/(....)(..)(..)(..)(..)(..)/\1-\2-\3 \4:\5:\6/p')
seconds1=$(date --date "$datestamp1" +%s)
seconds2=$(date --date "$datestamp2" +%s)
# standard sh integer arithmetics
delta=$((seconds1 - seconds2))
echo "$delta seconds" # "45197940 seconds"
We've not provided timezone information here, so the local timezone is assumed; your epoch-second values will probably differ from mine. (If your values are UTC then you can use date --utc.)
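Applied back to the original goal — only notify when at least 60 minutes have passed since the previous out-of-range entry — the conversion can be wrapped in a small helper. A sketch assuming bash and GNU date; the helper name and the two timestamps are illustrative:

```shell
#!/bin/bash
# Convert a yyyymmddHHMMSS stamp to seconds since the epoch (GNU date)
to_epoch() {
    local t=$1
    date --date "${t:0:4}-${t:4:2}-${t:6:2} ${t:8:2}:${t:10:2}:${t:12:2}" +%s
}

elapsed=$(( $(to_epoch 20170817040001) - $(to_epoch 20170817031001) ))
if [ "$elapsed" -ge 3600 ]; then
    echo "notify"        # an hour or more since the last warning
else
    echo "suppress"      # too soon, stay quiet (here: only 3000 s elapsed)
fi
```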
| Bash: calculate the time elapsed between two timestamps |
1,380,795,325,000 |
I read some articles/tutorials on 'ifconfig' command, most of them included a common statement -
"ifconfig is deprecated by ip command"
and suggested learning the ip command instead. But none of them explained how the ip command is more powerful than ifconfig.
What is the difference between both of them?
|
ifconfig is from net-tools, which hasn't been able to fully keep up with the Linux network stack for a long time. It also still uses ioctl for network configuration, which is an ugly and less powerful way of interacting with the kernel.
Many changes in the Linux networking code, and many new features, aren't accessible using net-tools: multipath routing, policy routing (see the RPDB), and so on. route allows you to do stupid things like adding multiple routes to the same destination with the same metric.
Additionally:
ifconfig doesn't report the proper hardware address for some devices.
You can't configure ipip, sit, gre, l2tp, etc. in-kernel static tunnels.
You can't create tun or tap devices.
The way of adding multiple addresses to a given interface also has poor semantics.
You also can't configure the Linux traffic control system using net-tools either.
See also ifconfig sucks.
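To make the difference concrete: the inspection side of ip works unprivileged, while the configuration examples below (commented out, since they need root) cover things net-tools simply cannot express. A sketch — the interface names and addresses are assumptions:

```shell
# Read-only inspection (no root needed)
ip link show             # interface state and hardware addresses
ip route show            # routing table, including multipath routes
ip rule show             # the routing policy database (RPDB)

# Configuration (root needed) -- examples net-tools cannot express:
# ip addr add 192.0.2.11/24 dev eth0      # proper secondary address
# ip tuntap add dev tap0 mode tap         # create a tap device
# ip rule add fwmark 1 table 100          # policy routing
# ip route add default via 192.0.2.1 table 100
```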
EDIT: Removed an assertion about net-tools development having ceased; I no longer remember where I got that from. net-tools has been worked on since iproute2 was released, though it's mostly bug fixing and minor enhancements and features, like internationalization.
| Difference between 'ifconfig' and 'ip' commands |
1,380,795,325,000 |
I woke up this morning to a notification email with some rather disturbing system log entries.
Dec 2 04:27:01 yeono kernel: [459438.816058] ata2.00: exception Emask 0x0 SAct 0xf SErr 0x0 action 0x6 frozen
Dec 2 04:27:01 yeono kernel: [459438.816071] ata2.00: failed command: WRITE FPDMA QUEUED
Dec 2 04:27:01 yeono kernel: [459438.816085] ata2.00: cmd 61/08:00:70:0d:ca/00:00:08:00:00/40 tag 0 ncq 4096 out
Dec 2 04:27:01 yeono kernel: [459438.816088] res 40/00:00:00:4f:c2/00:00:00:00:00/40 Emask 0x4 (timeout)
Dec 2 04:27:01 yeono kernel: [459438.816095] ata2.00: status: { DRDY }
(the above five lines were repeated a few times at a short interval)
Dec 2 04:27:01 yeono kernel: [459438.816181] ata2: hard resetting link
Dec 2 04:27:02 yeono kernel: [459439.920055] ata2: SATA link down (SStatus 0 SControl 300)
Dec 2 04:27:02 yeono kernel: [459439.932977] ata2: hard resetting link
Dec 2 04:27:09 yeono kernel: [459446.100050] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 2 04:27:09 yeono kernel: [459446.314509] ata2.00: configured for UDMA/133
Dec 2 04:27:09 yeono kernel: [459446.328037] ata2.00: device reported invalid CHS sector 0
("reported invalid CHS sector 0" repeated a few times at a short interval)
I make full nightly backups of my entire system to an external (USB-connected) drive, and the above happened right in the middle of that backup run. (The backup starts at 04:00 through cron, and tonight's logged completion just before 04:56.) The backup process itself claims to have completed without any errors.
There are two internally connected SATA drives and two externally (USB) connected drives on my system; one of the external drives is currently dormant. I don't recall off the top of my head which physical SATA ports are used for which of the internal drives.
When googling I found the AskUbuntu question Is this drive failure or something else? which indicates that a very similar error occured after 8-10 GB had been copied to a drive, but the actual failure mode was different as the drive switched to a read-only state. The only real similarity is that I did add on the order of 7-8 GB of data to my main storage last night, which would have been backed up around the time that the error occured.
smartd is not reporting anything out of the ordinary on either of the internal drives. Unfortunately smartctl doesn't speak the language of the external backup drive's USB bridge, and simply complains about Unknown USB bridge [0x0bc2:0x3320 (0x100)]. Googling for that specific error was distinctly unhelpful.
My main data storage as well as the backup is on ZFS and zpool status reports 0 errors and no known data errors. Nevertheless I have initiated a full scrub on both the internal and external drives. It is currently slated to complete in about six hours for the internal drive (main storage pool) and 13-14 hours for the backup drive.
It seems that the next step should be to determine which drive was having trouble, and possibly replace it. The ata2.00 part probably tells me which drive was having problems, but how do I map that identifier to a physical drive?
|
I wrote a one-liner based on Tobi Hahn's answer.
For example, you want to know which device corresponds to ata3:
ata=3; ls -l /sys/block/sd* | grep $(grep $ata /sys/class/scsi_host/host*/unique_id | awk -F'/' '{print $5}')
It will produce something like this
lrwxrwxrwx 1 root root 0 Jan 15 15:30 /sys/block/sde -> ../devices/pci0000:00/0000:00:1f.5/host2/target2:0:0/2:0:0:0/block/sde
| Given a kernel ATA exception, how to determine which physical disk is affected? [duplicate] |
1,380,795,325,000 |
I have simulated network latency with netem and It's great. Now I want to simulate unplugged network cable or when server goes down. I need this to make testing of my application easier and I couldn't find anything on the web that would help me. My servers are virtual CentOS instances and they are on Virtualbox. I want to do this from a php web page.
|
Just bring the interface down. For example, with eth0:
ip link set eth0 down
To bring the interface back up:
ip link set eth0 up
| How to simulate unplugged network cable or down server? |
1,380,795,325,000 |
Is there a (technical or practical) limit to how large you can configure the maximum number of open files in Linux? Are there some adverse effects if you configure it to a very large number (say 1-100M)?
I'm thinking server usage here, not embedded systems. Programs using huge amounts of open files can of course eat memory and be slow, but I'm interested in adverse effects if the limit is configured much larger than necessary (e.g. memory consumed by just the configuration).
|
I suspect the main reason for the limit is to avoid excess memory consumption (each open file descriptor uses kernel memory). It also serves as a safeguard against buggy applications leaking file descriptors and consuming system resources.
But given how absurdly much RAM modern systems have compared to systems 10 years ago, I think the defaults today are quite low.
In 2011 the default hard limit for file descriptors on Linux was increased from 1024 to 4096.
Some software (e.g. MongoDB) uses many more file descriptors than the default limit. The MongoDB folks recommend raising this limit to 64,000. I've used an rlimit_nofile of 300,000 for certain applications.
As long as you keep the soft limit at the default (1024), it's probably fairly safe to increase the hard limit. Programs have to call setrlimit() in order to raise their limit above the soft limit, and are still capped by the hard limit.
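To see where a given system currently stands, a quick sketch of the relevant knobs:

```shell
# Per-process limits for this shell
ulimit -Sn    # soft limit (1024 on many distros by default)
ulimit -Hn    # hard limit

# System-wide knobs
cat /proc/sys/fs/file-max   # ceiling on open file handles system-wide
cat /proc/sys/fs/file-nr    # allocated, unused, and maximum handles
cat /proc/sys/fs/nr_open    # per-process ceiling the hard rlimit cannot exceed
```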
See also some related questions:
https://serverfault.com/questions/356962/where-are-the-default-ulimit-values-set-linux-centos
https://serverfault.com/questions/773609/how-do-ulimit-settings-impact-linux
| Largest allowed maximum number of open files in Linux |
1,380,795,325,000 |
Suppose my non-root 32-bit app runs on a 64-bit system, all filesystems of which are mounted as read-only. The app creates an image of a 64-bit ELF in memory. But due to read-only filesystems it can't dump this image to a file to do an execve on. Is there still a supported way to launch a process from this image?
Note: the main problem here is to switch from 32-bit mode to 64-bit, not doing any potentially unreliable hacks. If this is solved, then the whole issue becomes trivial — just make a custom loader.
|
Yes, via memfd_create and fexecve:
#define _GNU_SOURCE     /* for memfd_create() and fexecve() */
#include <sys/mman.h>
#include <unistd.h>
int fd = memfd_create("foo", MFD_CLOEXEC);
// write your image to fd however you want
fexecve(fd, argv, envp);
| Can I exec an entirely new process without an executable file? |
1,380,795,325,000 |
I want to make "echo 1 > /sys/kernel/mm/ksm/run" persistent between boots. I know that I can edit /etc/sysctl.conf to make /proc filesystem changes persist, but this doesn't seem to work for /sys. How would I make this change survive reboots?
|
Most distros have some sort of an rc.local script that you could use. Check your distro, as names and paths may vary. Normally expect to look under /etc.
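For example, on a distro that still ships an rc.local, the line from the question can go in directly; on systemd-based systems, a tmpfiles.d entry achieves the same at boot. Both are sketches of configuration fragments; the paths are the usual defaults:

```shell
# /etc/rc.local -- must be executable (chmod +x) and end with exit 0
#!/bin/sh
echo 1 > /sys/kernel/mm/ksm/run
exit 0

# Alternative: /etc/tmpfiles.d/ksm.conf
# (systemd-tmpfiles writes the value at every boot)
#   w /sys/kernel/mm/ksm/run - - - - 1
```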
| Make changes to /sys persistent between boots |
1,380,795,325,000 |
Yesterday, one of our computers dropped to grub shell or honestly, I am unsure what shell it was when we turned on the machine.
It showed that it couldn't mount the root filesystem, or something to that effect, because of inconsistencies.
I ran, I believe:
fsck -fy /dev/sda2
Rebooted and the problem was gone.
Here comes the question part:
I already have in her root's crontab:
@reboot /home/ruzena/Development/bash/fs-check.sh
while the script contains:
#!/bin/bash
touch /forcefsck
Thinking about it, I don't know why I created a script file for such a short command, but anyway...
Further, in the file:
/etc/default/rcS
I have defined:
FSCKFIX=yes
So I don't get it. How could the situation even arise?
What should I do to force the root filesystem check (and optionally a fix) at boot?
Or are these two things the maximum, that I can do?
OS: Linux Mint 18.x Cinnamon 64-bit.
fstab:
cat /etc/fstab | grep ext4
shows:
UUID=a121371e-eb12-43a0-a5ae-11af58ad09f4 / ext4 errors=remount-ro 0 1
grub:
fsck.mode=force
was already added to the grub configuration.
|
ext4 filesystem check during boot
Tested on OS: Linux Mint 18.x in a Virtual Machine
Basic information
/etc/fstab has the fsck order as the last (6th) column, for instance:
<file system> <mount point> <type> <options> <dump> <fsck>
UUID=2fbcf5e7-1234-abcd-88e8-a72d15580c99 / ext4 errors=remount-ro 0 1
FSCKFIX=yes variable in /etc/default/rcS
This will change the fsck to auto fix, but not force a fsck check.
From man rcS:
FSCKFIX
When the root and all other file systems are checked, fsck is
invoked with the -a option which means "autorepair". If there
are major inconsistencies then the fsck process will bail out.
The system will print a message asking the administrator to
repair the file system manually and will present a root shell
prompt (actually a sulogin prompt) on the console. Setting this
option to yes causes the fsck commands to be run with the -y
option instead of the -a option. This will tell fsck always to
repair the file systems without asking for permission.
From man tune2fs
If you are using journaling on your filesystem, your filesystem
will never be marked dirty, so it will not normally be checked.
Start with
Setting the following
FSCKFIX=yes
in the file
/etc/default/rcS
Check and note last time fs was checked:
sudo tune2fs -l /dev/sda1 | grep "Last checked"
These two options did NOT work
Passing -F (force fsck on reboot) argument to shutdown:
shutdown -rF now
Nope; see: man shutdown.
Adding the /forcefsck empty file with:
touch /forcefsck
These scripts seem to use this:
/etc/init.d/checkfs.sh
/etc/init.d/checkroot.sh
did NOT work on reboot, but the file was deleted.
Verified by:
sudo tune2fs -l /dev/sda1 | grep "Last checked"
sudo less /var/log/fsck/checkfs
sudo less /var/log/fsck/checkroot
These seem to be the logs for the init scripts.
I repeat, these two options did NOT work!
Both of these methods DID work
systemd-fsck kernel boot switches
Editing the main grub configuration file:
sudoedit /etc/default/grub
GRUB_CMDLINE_LINUX="fsck.mode=force"
sudo update-grub
sudo reboot
This did do a file system check as verified with:
sudo tune2fs -l /dev/sda1 | grep "Last checked"
Note: This DID a check, but to also force repairs, you need to additionally specify fsck.repair=preen or fsck.repair=yes.
Using tune2fs to set the number of filesystem mounts allowed before a fsck is forced (see man tune2fs)
tune2fs' info is kept in the filesystem superblock
The -c switch sets the maximum number of mounts before the filesystem is checked.
sudo tune2fs -c 1 /dev/sda1
Verify with:
sudo tune2fs -l /dev/sda1
This DID work as verified with:
sudo tune2fs -l /dev/sda1 | grep "Last checked"
Summary
To force a fsck on every boot on Linux Mint 18.x, use either tune2fs, or fsck.mode=force, with optional fsck.repair=preen / fsck.repair=yes, the kernel command line switches.
| What should I do to force the root filesystem check (and optionally a fix) at boot? |
1,380,795,325,000 |
We can check the details of a System V message queue with the help of the ipcs command. Is there any command to check POSIX message queues in Linux?
|
There is no command I know of but there exists a libc function call which can get the statistics:
man 3 mq_getattr
mq_getattr() returns an mq_attr structure in the buffer pointed by
attr. This structure is defined as:
struct mq_attr {
long mq_flags; /* Flags: 0 or O_NONBLOCK */
long mq_maxmsg; /* Max. # of messages on queue */
long mq_msgsize; /* Max. message size (bytes) */
long mq_curmsgs; /* # of messages currently in queue */
};
| linux command to check POSIX message queue |
1,380,795,325,000 |
I have a big csv file, which looks like this:
1,2,3,4,5,6,-99
1,2,3,4,5,6,-99
1,2,3,4,5,6,-99
1,2,3,4,5,6,25178
1,2,3,4,5,6,27986
1,2,3,4,5,6,-99
I want to select only the lines in which the 7th columns is equal to -99, so my output be:
1,2,3,4,5,6,-99
1,2,3,4,5,6,-99
1,2,3,4,5,6,-99
1,2,3,4,5,6,-99
I tried the following:
awk -F, '$7 == -99' input.txt > output.txt
awk -F, '{ if ($7 == -99) print $1,$2,$3,$4,$5,$6,$7 }' input.txt > output.txt
But both of them returned an empty output.txt. Can anyone tell me what I'm doing wrong?
Thanks.
|
The file that you run the script on has DOS line-endings. It may be that it was created on a Windows machine.
Use dos2unix to convert it to a Unix text file.
Alternatively, run it through tr:
tr -d '\r' <input.txt >input-unix.txt
Then use input-unix.txt with your otherwise correct awk code.
To modify the awk code instead of the input file:
awk -F, '$7 == "-99\r"' input.txt >output.txt
This takes the carriage-return at the end of the line into account.
Or,
awk -F, '$7 + 0 == -99' input.txt >output.txt
This forces the 7th column to be interpreted as a number, which "removes" the carriage-return.
Similarly,
awk -F, 'int($7) == -99' input.txt >output.txt
would also remove the \r.
| Using AWK to select rows with specific value in specific column |
1,380,795,325,000 |
What are the cons of having a restrictive umask of 077? A lot of distros (I believe all except Red Hat?) have a default umask of 022, configured in /etc/profile. This seems way too insecure for a non-desktop system which multiple users access, and where security is of concern.
On a related note, on Ubuntu, users' home directories are also created with 755 permissions, and the installer states that this is to make it easier for users to share files. Assuming that users are comfortable setting permissions by hand to make files shared, this is not a problem.
What other downsides are there?
|
022 makes things convenient. 077 makes things less convenient, but depending on the circumstances and usage profile, it might not be any less convenient than having to use sudo.
I would argue that, like sudo, the actual, measurable security benefit you gain from this is negligible compared to the level of pain you inflict on yourself and your users. As a consultant, I have been scorned for my views on sudo and challenged to break numerous sudo setups, and I have yet to take more than 15 seconds to do so. Your call.
Knowing about umask is good, but it's just a single Corn Flake in the "complete breakfast". Maybe you should be asking yourself "Before I go mucking with default configs, the consistency of which will need to be maintained across installs, and which will need to be documented and justified to people who aren't dim-witted, what's this gonna buy me?"
Umask is also a bash built-in that is settable by individual users in their shell initialization files (~/.bash*), so you're not really able to easily enforce the umask. It's just a default. In other words, it's not buying you much.
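The practical difference between the two defaults is easy to demonstrate — a sketch run in a scratch directory:

```shell
# Create files and directories under each umask and compare the modes
tmp=$(mktemp -d) && cd "$tmp"
( umask 022; touch open.txt;   mkdir shared.d )
( umask 077; touch closed.txt; mkdir private.d )
ls -ld open.txt shared.d closed.txt private.d
# open.txt   -> rw-r--r--    shared.d  -> rwxr-xr-x
# closed.txt -> rw-------    private.d -> rwx------
```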
| Downsides of umask 077? |
1,380,795,325,000 |
btrfs supports reflinks, XFS supports reflinks (since 2017 I think?).
Are there any other filesystems that support it?
truncate -s 1G test.file;
cp --reflink=always test.file ref.test.file;
|
Support for reflinks is indicated using the remap_file_range operation, which is currently (6.7) supported by bcachefs, Btrfs, CIFS, NFS 4.2, OCFS2, overlayfs, and XFS.
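The check from the question can be wrapped into a small probe you run on the filesystem you care about — a sketch that prints one line either way:

```shell
# Attempt a reflink copy; cp fails cleanly if the filesystem lacks support
truncate -s 1M probe.file
if cp --reflink=always probe.file probe.clone 2>/dev/null; then
    echo "reflinks supported"
else
    echo "reflinks not supported"
fi
rm -f probe.file probe.clone
```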
| In Linux, which filesystems support reflinks? |
1,380,795,325,000 |
I am reading up on Linux processes from The Linux Documentation Project: https://www.tldp.org/LDP/tlk/kernel/processes.html
Processes are always making system calls and so may often need to wait. Even so, if a process executes until it waits then it still might use a disproportionate amount of CPU time and so Linux uses pre-emptive scheduling. In this scheme, each process is allowed to run for a small amount of time, 200ms, and, when this time has expired another process is selected to run and the original process is made to wait for a little while until it can run again. This small amount of time is known as a time-slice.
My question is, how is this time being kept track of? If the process is currently the only one occupying the CPU, then there is nothing actually checking if the time has expired, right?
I understand that processes jump to syscalls and those jump back to the scheduler, so it makes sense how processes can be “swapped” in that regards. But how is Linux capable of keeping track how much time a process has had on the CPU? Is it only possible via hardware timers?
|
The short answer is yes. All practical approaches to preemption will use some sort of CPU interrupt to jump back into privileged mode, i.e. the linux kernel scheduler.
If you look at your /proc/interrupts you'll find the interrupts used in the system, including timers.
Note that linux has several different types of schedulers, and the classic periodic timer style, is seldom used - from the Completely Fair Scheduler (CFS) documentation:
CFS uses nanosecond granularity accounting and does not rely on any jiffies or other HZ detail. Thus the CFS scheduler has no notion of “timeslices” in the way the previous scheduler had, and has no heuristics whatsoever.
Also, when a program issues a system call (Usually by a software interrupt - "trap"), the kernel is also able to preempt the calling program, this is especially evident with system calls waiting for data from other processes.
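Both mechanisms are observable from userspace — a sketch; exact interrupt names vary by architecture:

```shell
# Timer interrupts that give the kernel a chance to preempt
# (e.g. "LOC: Local timer interrupts" on x86)
grep -i timer /proc/interrupts

# How often this shell yielded the CPU voluntarily vs. was preempted
grep ctxt_switches /proc/$$/status
```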
| How does Linux accomplish pre-emptive scheduling? |