| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,294,624,203,000 |
When I execute a program without specifying the full path to the executable, and Bash must search the directories in $PATH to find the binary, it seems that Bash remembers the path in some sort of cache. For example, I installed a build of Subversion from source to /usr/local, then typed svnsync help at the Bash prompt. Bash located the binary /usr/local/bin/svnsync for "svnsync" and executed it. Then when I deleted the installation of Subversion in /usr/local and re-ran svnsync help, Bash responds:
bash: /usr/local/bin/svnsync: No such file or directory
But, when I start a new instance of Bash, it finds and executes /usr/bin/svnsync.
How do I clear the cache of paths to executables?
|
bash does cache the full path to a command. You can verify that the command you are trying to execute is hashed with the type command:
$ type svnsync
svnsync is hashed (/usr/local/bin/svnsync)
To clear the entire cache:
$ hash -r
Or just one entry:
$ hash -d svnsync
For additional information, consult help hash and man bash.
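A quick way to see the cache in action in a script (the `/tmp` paths and the command name here are invented for the demonstration):

```shell
# Create a throwaway command somewhere bash will have to search for it.
mkdir -p /tmp/demo_bin
printf '#!/bin/sh\necho hello\n' > /tmp/demo_bin/demo_cmd
chmod +x /tmp/demo_bin/demo_cmd

# Inside bash: the first lookup walks $PATH and caches the result.
# `hash -t NAME` prints the cached path; `hash -d NAME` / `hash -r` clear it.
cached=$(bash -c 'PATH=/tmp/demo_bin:$PATH
                  demo_cmd > /dev/null
                  hash -t demo_cmd')
echo "$cached"
```

After `hash -r`, the next invocation performs a fresh `$PATH` search, which is exactly what resolves the stale `/usr/local/bin/svnsync` entry.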
| How do I clear Bash's cache of paths to executables? |
1,294,624,203,000 |
I understand how to specify shared objects at link/compile time. However, I still wonder how executables look for the shared object (*.so libraries) at execution time.
For instance, my app a.out calls functions defined in the lib.so library. After compiling, I move lib.so to a new directory in my $HOME.
How can I tell a.out to go look for it there?
|
The shared library HOWTO explains most of the mechanisms involved, and the dynamic loader manual goes into more detail. Each unix variant has its own way, but most use the same executable format (ELF) and have similar dynamic linkers¹ (derived from Solaris). Below I'll summarize the common behavior with a focus on Linux; check your system's manuals for the complete story.
(Terminology note: the part of the system that loads shared libraries is often called “dynamic linker”, but sometimes “dynamic loader” to be more precise. “Dynamic linker” can also mean the tool that generates instructions for the dynamic loader when compiling a program, or the combination of the compile-time tool and the run-time loader. In this answer, “linker” refers to the run-time part.)
In a nutshell, when it's looking for a dynamic library (.so file) the linker tries:
directories listed in the LD_LIBRARY_PATH environment variable (DYLD_LIBRARY_PATH on OSX);
directories listed in the executable's rpath;
directories on the system search path, which (on Linux at least) consists of the entries in /etc/ld.so.conf plus /lib and /usr/lib.
The rpath is stored in the executable (it's the DT_RPATH or DT_RUNPATH dynamic attribute). It can contain absolute paths or paths starting with $ORIGIN to indicate a path relative to the location of the executable (e.g. if the executable is in /opt/myapp/bin and its rpath is $ORIGIN/../lib:$ORIGIN/../plugins then the dynamic linker will look in /opt/myapp/lib and /opt/myapp/plugins). The rpath is normally determined when the executable is compiled, with the -rpath option to ld, but you can change it afterwards with chrpath.
In the scenario you describe, if you're the developer or packager of the application and intend for it to be installed in a …/bin, …/lib structure, then link with -rpath='$ORIGIN/../lib'. If you're installing a pre-built binary on your system, either put the library in a directory on the search path (/usr/local/lib if you're the system administrator, otherwise a directory that you add to $LD_LIBRARY_PATH), or try chrpath.
| Where do executables look for shared objects at runtime? |
1,294,624,203,000 |
I have an executable for the Perforce version control client (p4). I can't place it in /opt/local because I don't have root privileges. Is there a standard location where it should be placed under $HOME?
Does the Filesystem Hierarchy Standard have a convention that says local executables/binaries should be placed in $HOME/bin?
I couldn't find such a convention mentioned on the Wikipedia article for the FHS.
Also, if there is indeed such a convention, would I have to explicitly add the path to $HOME/bin (or wherever the bin directory is) to my $PATH?
|
In general, if a non-system installed and maintained binary needs to be accessible system-wide to multiple users, it should be placed by an administrator into /usr/local/bin. There is a complete hierarchy under /usr/local that is generally used for locally compiled and installed software packages.
If you are the only user of a binary, installing into $HOME/bin or $HOME/.local/bin is the appropriate location since you can install it yourself and you will be the only consumer. If you compile a software package from source, it's also appropriate to create a partial or full local hierarchy in your $HOME or $HOME/.local directory. Using $HOME, the full local hierarchy would look like this.
$HOME/bin Local binaries
$HOME/etc Host-specific system configuration for local binaries
$HOME/games Local game binaries
$HOME/include Local C header files
$HOME/lib Local libraries
$HOME/lib64 Local 64-bit libraries
$HOME/man Local online manuals
$HOME/sbin Local system binaries
$HOME/share Local architecture-independent hierarchy
$HOME/src Local source code
When running configure, you should define your local hierarchy for installation by specifying $HOME as the prefix for the installation defaults.
./configure --prefix=$HOME
Now when make && make install are run, the compiled binaries, packages, man pages, and libraries will be installed into your $HOME local hierarchy. If you have not manually created a $HOME local hierarchy, make install will create the directories needed by the software package.
Once installed in $HOME/bin, you can either add $HOME/bin to your $PATH or call the binary by its absolute path. Some distributions include $HOME/bin in your $PATH by default. You can test this either by running echo $PATH and seeing whether $HOME/bin is there, or by putting the binary in $HOME/bin and executing which binaryname. If that comes back with $HOME/bin/binaryname, then it is in your $PATH by default.
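If your distribution does not add it by default, a portable snippet (suitable for ~/.profile) that appends $HOME/bin only when it is missing could look like this:

```shell
# Add $HOME/bin to PATH, but only if it is not already present (POSIX sh).
case ":$PATH:" in
  *":$HOME/bin:"*) : ;;              # already there, nothing to do
  *) PATH="$HOME/bin:$PATH" ;;
esac
export PATH
```

The `case` guard avoids the duplicate entries you would otherwise accumulate each time the file is sourced.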
| Where should a local user executable be placed (under $HOME)? |
1,294,624,203,000 |
I'm learning C#, so I made a little C# program that says Hello, World!, then compiled it with mono-csc and ran it with mono:
$ mono-csc Hello.cs
$ mono Hello.exe
Hello, World!
I noticed that when I hit TAB in bash, Hello.exe was marked executable. Indeed, the shell runs it when I enter just the filename!
Hello.exe is not an ELF file with a funny file extension:
$ readelf -a Hello.exe
readelf: Error: Not an ELF file - it has the wrong magic bytes at the start
$ xxd Hello.exe | head -n1
00000000: 4d5a 9000 0300 0000 0400 0000 ffff 0000 MZ..............
MZ means it's a Microsoft Windows statically linked executable. Drop it onto a Windows box, and it will (should) run.
I have wine installed, but wine, being a compatibility layer for Windows apps, takes about 5x as long to run Hello.exe as mono or direct execution does, so it's not wine that runs it.
I'm assuming there's some mono kernel module installed with mono that intercepts the exec syscall/s, or catches binaries that begin with 4D 5A, but lsmod | grep mono and friends return an error.
What's going on here, and how does the kernel know that this executable is special?
Just for proof it's not my shell working magic, I used the Crap Shell (aka sh) to run it and it still runs natively.
Here's the program in full, since a commenter was curious:
using System;
class Hello {
/// <summary>
/// The main entry point for the application
/// </summary>
[STAThread]
public static void Main(string[] args) {
System.Console.Write("Hello, World!\n");
}
}
|
This is binfmt_misc in action: it allows the kernel to be told how to run binaries it doesn't know about. Look at the contents of /proc/sys/fs/binfmt_misc; among the files you see there, one should explain how to run Mono binaries:
enabled
interpreter /usr/lib/binfmt-support/run-detectors
flags:
offset 0
magic 4d5a
(on a Debian system). This tells the kernel that binaries starting with MZ (4d5a) should be given to run-detectors. The latter figures out whether to use Mono or Wine to run the binary.
Binary types can be added, removed, enabled and disabled at any time; see the documentation above for details (the semantics are surprising; the virtual filesystem used here doesn't behave entirely like a standard filesystem). /proc/sys/fs/binfmt_misc/status gives the global status, and each binary "descriptor" shows its individual status. Another way of disabling binfmt_misc is to unload its kernel module, if it's built as a module; this also means it's possible to blacklist it to avoid it entirely.
This feature allows new binary types to be supported, such as MZ executables (which include Windows PE and PE+ binaries, but also DOS and OS/2 binaries!), Java JAR files... It also allows known binary types to be supported on new architectures, typically using Qemu; thus, with the appropriate libraries, you can transparently run ARM Linux binaries on an Intel processor!
Your question stemmed from cross-compilation, albeit in the .NET sense, and that brings up a caveat with binfmt_misc: some configuration scripts misbehave when you try to cross-compile on a system which can run the cross-compiled binaries. Typically, detecting cross-compilation involves building a binary and attempting to run it; if it runs, you're not cross-compiling, if it doesn't, you are (or your compiler's broken). autoconf scripts can usually be fixed in this case by explicitly specifying the build and host architectures, but sometimes you'll have to disable binfmt_misc temporarily...
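The kernel's matching is just a byte comparison at a given offset. Here is the same check in miniature, using a hypothetical file with a PE-style header:

```shell
# Write a fake file whose first bytes mimic an MZ executable header.
printf 'MZ\220\000' > /tmp/fake.exe

# Read 2 bytes at offset 0 as hex -- the same comparison the kernel makes
# against a registered entry with "offset 0" and "magic 4d5a".
magic=$(od -An -tx1 -N2 /tmp/fake.exe | tr -d ' \n')
echo "$magic"
```

If the bytes match a registered entry's magic, the kernel hands the file to that entry's interpreter (here, run-detectors).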
| How is Mono magical? |
1,294,624,203,000 |
Before today, I'd used the terminal only to a limited extent: moving in and out of directories and changing the dates of files with the touch command. I realised the full power of the terminal after installing a fun script on my Mac and having to chmod 755 the file to make it executable.
I'd like to know what /usr/local/bin is, though. /usr/, I assume, is the user of the computer. I'm not sure why /local/ is there, though. It obviously stands for the local computer, but since it's on the computer (or a server), would it really be necessary? Wouldn't /usr/bin be fine?
And what is /bin? Why is this area usually used for installing scripts onto the terminal?
|
/usr/local/bin is for programs that a normal user may run.
The /usr/local hierarchy is for use by the system administrator when installing software locally. It needs to be safe from being overwritten when the system software is updated. It may be used for programs and data that are shareable amongst a group of hosts, but not found in /usr.
Locally installed software must be placed within /usr/local rather than /usr unless it is being installed to replace or upgrade software in /usr.
This source helps explain the filesystem hierarchy standard on a deeper level.
You might find this article on the use and abuse of /usr/local/bin interesting as well.
| What is /usr/local/bin? |
1,294,624,203,000 |
I am trying to understand the concept of special files on Linux. However, having a special file in /dev seems plain silly when, as far as I know, its function could be implemented by a handful of lines in C.
Moreover you could use it in pretty much the same manner, i.e. piping into null instead of redirecting into /dev/null. Is there a specific reason for having it as a file? Doesn't making it a file cause many other problems, like too many programs accessing the same file?
|
In addition to the performance benefits of using a character-special device, the primary benefit is modularity. /dev/null may be used in almost any context where a file is expected, not just in shell pipelines. Consider programs that accept files as command-line parameters.
# We don't care about log output.
$ frobify --log-file=/dev/null
# We are not interested in the compiled binary, just seeing if there are errors.
$ gcc foo.c -o /dev/null || echo "foo.c does not compile!"
# Easy way to force an empty list of exceptions.
$ start_firewall --exception_list=/dev/null
These are all cases where using a program as a source or sink would be extremely cumbersome. Even in the shell pipeline case, stdout and stderr may be redirected to files independently, something that is difficult to do with executables as sinks:
# Suppress errors, but print output.
$ grep foo * 2>/dev/null
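The "acts like any file" point is easy to verify: writes to /dev/null succeed and vanish, and reads see end-of-file immediately, which is exactly the behavior those command-line examples rely on.

```shell
# Writes to /dev/null succeed and are discarded.
echo "discard me" > /dev/null

# Reads hit EOF immediately, so /dev/null doubles as a guaranteed-empty file.
bytes=$(wc -c < /dev/null)
echo "$bytes"
```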
| Why is /dev/null a file? Why isn't its function implemented as a simple program? |
1,294,624,203,000 |
Why do we use ./filename to execute a file in Linux?
Why not just enter it like other commands, such as gcc or ls?
|
In Linux, UNIX and related operating systems, . denotes the current directory. Since you want to run a file in your current directory and that directory is not in your $PATH, you need the ./ bit to tell the shell where the executable is. So, ./foo means run the executable called foo that is in this directory.
You can use type or which to get the full path of any commands found in your $PATH.
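A quick demonstration with a throwaway command in a fresh directory (names invented for the example):

```shell
# Create a tiny executable in a directory that is certainly not in $PATH.
dir=$(mktemp -d)
cd "$dir"
printf '#!/bin/sh\necho ran\n' > foo
chmod +x foo

out=$(./foo)    # explicit path: the shell skips the $PATH search entirely
echo "$out"
# Plain `foo` would only work if . (or $dir) were listed in $PATH.
```

Adding `.` to `$PATH` would make the explicit `./` unnecessary, but it is generally discouraged for security reasons.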
| Why do we use "./" (dot slash) to execute a file in Linux/UNIX? |
1,294,624,203,000 |
This may be a silly question, but I'll ask it anyway. If I have declared a shebang
#!/bin/bash
in the beginning of my_shell_script.sh, so do I always have to invoke this script using bash
[my@comp]$bash my_shell_script.sh
or can I use e.g.
[my@comp]$sh my_shell_script.sh
and my script determines the running shell using the shebang? Is it the same happening with ksh shell? I'm using AIX.
|
The shebang #! is a human-readable instance of a magic number consisting of the byte string 0x23 0x21, which the exec() family of functions uses to determine whether the file to be executed is a script or a binary. When the shebang is present, exec() instead runs the executable specified after it, passing the script's path as an argument.
Note that this means that if you invoke a script by specifying the interpreter on the command line, as is done in both cases given in the question, exec() will execute the interpreter specified on the command line, it won't even look at the script.
So, as others have noted, if you want exec() to invoke the interpreter specified on the shebang line, the script must have the executable bit set and be invoked as ./my_shell_script.sh.
The behaviour is easy to demonstrate with the following script:
#!/bin/ksh
readlink /proc/$$/exe
Explanation:
#!/bin/ksh defines ksh to be the interpreter.
$$ holds the PID of the current process.
/proc/pid/exe is a symlink to the executable of the process (at least on Linux; on AIX, /proc/$$/object/a.out is a link to the executable).
readlink will output the value of the symbolic link.
Example:
Note: I'm demonstrating this on Ubuntu, where the default shell /bin/sh is a symlink to dash i.e. /bin/dash and /bin/ksh is a symlink to /etc/alternatives/ksh, which in turn is a symlink to /bin/pdksh.
$ chmod +x getshell.sh
$ ./getshell.sh
/bin/pdksh
$ bash getshell.sh
/bin/bash
$ sh getshell.sh
/bin/dash
| Does the shebang determine the shell which runs the script? |
1,294,624,203,000 |
I've created a bash script but when I try to execute it, I get
#!/bin/bash no such file or directory
I need to run the command: bash script.sh for it to work.
How can I fix this?
|
This kind of message is usually due to a buggy shebang line, either an extra carriage return at the end of the first line or a BOM at the beginning of it.
Run:
$ head -1 yourscript | od -c
and see how it ends.
This is wrong:
0000000 # ! / b i n / b a s h \r \n
This is wrong too:
0000000 357 273 277 # ! / b i n / b a s h \n
This is correct:
0000000 # ! / b i n / b a s h \n
Use dos2unix (or sed, tr, awk, perl, python…) to fix your script if this is the issue.
Here is one that will remove both a BOM and trailing CRs (GNU sed):
sed -i '1s/^\xEF\xBB\xBF//;s/\r$//' brokenScript
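The CRLF case is easy to reproduce and repair end to end (the script name is made up; `sed -i` as used here is a GNU extension):

```shell
# Reproduce: a script saved with Windows (CRLF) line endings.
printf '#!/bin/sh\r\necho fixed\r\n' > /tmp/crlfScript
chmod +x /tmp/crlfScript
/tmp/crlfScript 2>&1 || true      # fails: the kernel looks for "/bin/sh\r"

# Repair: strip the carriage returns, then it runs normally.
sed -i 's/\r$//' /tmp/crlfScript
out=$(/tmp/crlfScript)
echo "$out"
```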
Note that the shell you are using to run the script will slightly affect the error messages that are displayed.
Here are three scripts just showing their name (echo $0) and having the following respective shebang lines:
correctScript:
0000000 # ! / b i n / b a s h \n
scriptWithBom:
0000000 357 273 277 # ! / b i n / b a s h \n
scriptWithCRLF:
0000000 # ! / b i n / b a s h \r \n
Under bash, running them will show these messages:
$ ./correctScript
./correctScript
$ ./scriptWithCRLF
bash: ./scriptWithCRLF: /bin/bash^M: bad interpreter: No such file or directory
$ ./scriptWithBom
./scriptWithBom: line 1: #!/bin/bash: No such file or directory
./scriptWithBom
Running the buggy ones by explicitly calling the interpreter allows the CRLF script to run without any issue:
$ bash ./scriptWithCRLF
./scriptWithCRLF
$ bash ./scriptWithBom
./scriptWithBom: line 1: #!/bin/bash: No such file or directory
./scriptWithBom
Here is the behavior observed under ksh:
$ ./scriptWithCRLF
ksh: ./scriptWithCRLF: not found [No such file or directory]
$ ./scriptWithBom
./scriptWithBom[1]: #!/bin/bash: not found [No such file or directory]
./scriptWithBom
and under dash:
$ ./scriptWithCRLF
dash: 2: ./scriptWithCRLF: not found
$ ./scriptWithBom
./scriptWithBom: 1: ./scriptWithBom: #!/bin/bash: not found
./scriptWithBom
| #!/bin/bash - no such file or directory |
1,294,624,203,000 |
If you create an executable file with the following contents, and run it, it will delete itself.
How does this work?
#!/bin/rm
|
The kernel interprets the line starting with #! and uses it to run the script, passing in the script's name; so this ends up running
/bin/rm scriptname
which deletes the script. (As Stéphane Chazelas points out, scriptname here is sufficient to find the script — if you specified a relative or absolute path, that's passed in as-is, otherwise whatever path was found in PATH is prepended, including possibly the empty string if your PATH contains that and the script is in the current directory. You can play around with an echo script — #!/bin/echo — to see how this works.)
As hobbs pointed out, this means your script is actually an rm script, not a bash script — the latter would start with #!/bin/bash.
See How programs get run for details of how this works in Linux; the comments on that article give details for other platforms. #! is called a shebang, you'll find lots of information by searching for that term (thanks to Aaron for the suggestion). As jlp pointed out, you'll also find it referred to as "pound bang" or "hash bang" (# is commonly known as "pound" — in countries that don't use £ — or "hash", and ! as "bang"). Wikipedia has more info.
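The `#!/bin/echo` experiment mentioned above makes the mechanism visible: the kernel runs the interpreter with the script's path appended, and echo simply prints its arguments.

```shell
# A script whose "interpreter" is /bin/echo (path invented for the demo).
printf '#!/bin/echo\n' > /tmp/echoscript
chmod +x /tmp/echoscript

# The kernel executes: /bin/echo /tmp/echoscript
out=$(/tmp/echoscript)
echo "$out"
```

Substitute /bin/rm for /bin/echo and that same argument becomes the file rm deletes.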
| Why does the following script delete itself? |
1,294,624,203,000 |
I made a backup to an NTFS drive, and well, this backup really proved necessary. However, the NTFS drive messed up permissions. I'd like to restore them to normal w/o manually fixing each and every file.
One problem is that suddenly all my text files gained execute permissions, which is wrong, of course. So I tried:
sudo chmod -R a-x folder\ with\ restored\ backup/
But it is wrong as it removes the x permission from directories as well which makes them unreadable.
What is the correct command in this case?
|
If you are fine with setting the execute permissions for everyone on all folders:
chmod -R -x+X -- 'folder with restored backup'
The -x removes execute permissions for everyone.
The +X then adds execute permissions back for everyone, but only on directories (it would also apply to files that still have an execute bit set, but after -x none do).
See Stéphane Chazelas's answer for a solution
that uses find to really not touch folders, as requested.
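A small reproduction of the fix (paths invented for the demo):

```shell
# A restored tree where a plain text file wrongly gained execute bits.
mkdir -p /tmp/restored/sub
printf 'text\n' > /tmp/restored/sub/notes.txt
chmod 755 /tmp/restored/sub/notes.txt     # wrong: a text file marked executable

chmod -R -x+X -- /tmp/restored            # the fix from above

dir_ok=$([ -x /tmp/restored/sub ] && echo yes)               # still traversable
file_ok=$([ ! -x /tmp/restored/sub/notes.txt ] && echo yes)  # no longer executable
echo "$dir_ok $file_ok"
```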
| How to recursively remove execute permissions from files without touching folders? |
1,294,624,203,000 |
Is it possible to execute a script if there is no permission to read it? As root, I made a script that I want another user to be able to execute but not read. I used chmod to forbid read and write but allow execute; however, as that user, I got the message: permission denied.
|
The issue is that the script is not what is actually running; the interpreter (bash, perl, python, etc.) is, and the interpreter needs to read the script. This is different from a "regular" program like ls, which is loaded directly by the kernel. Since the kernel itself reads the program file, it doesn't need to worry about read access for the invoking user. The interpreter, on the other hand, opens the script the way it would open any normal file, so the user running it needs read permission.
| Can a script be executable but not readable? |
1,294,624,203,000 |
I've got a simple script:
#!/usr/bin/env ruby --verbose
# script.rb
puts "hi"
On my OSX box, it runs fine:
osx% ./script.rb
hi
However, on my linux box, it throws an error
linux% ./script.rb
/usr/bin/env: ruby --verbose: No such file or directory
If I run the shebang line manually, it works fine
linux% /usr/bin/env ruby --verbose ./script.rb
hi
But I can replicate the error if I pack ruby --verbose into a single argument to env
linux% /usr/bin/env "ruby --verbose" ./script.rb
/usr/bin/env: ruby --verbose: No such file or directory
So I think this is an issue with how env is interpreting the rest of the shebang line. I'm using env from GNU coreutils 8.4:
linux% /usr/bin/env --version
env (GNU coreutils) 8.4
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by Richard Mlynarik and David MacKenzie.
This seems really odd. Is this a common issue with this version of env, or is there something else going on here that I don't know?
|
Looks like this is because Linux (unlike BSD) only passes a single argument to the shebang command (in this case env).
This has been extensively discussed on StackOverflow.
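The failure is reproducible without a shebang at all, since env treats its first argument, spaces included, as a single program name:

```shell
# env receives "printf x" as one program name, so the lookup fails --
# the same thing that happens with `#!/usr/bin/env ruby --verbose` on Linux.
if env "printf x" >/dev/null 2>&1; then result=ran; else result=failed; fi
echo "$result"

# On GNU coreutils >= 8.30, env -S splits the string back into words,
# so `#!/usr/bin/env -S ruby --verbose` is a common fix on newer systems.
```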
| Shebang line with `#!/usr/bin/env command --argument` fails on Linux |
1,294,624,203,000 |
I have currently a strange problem on debian (wheezy/amd64).
I have created a chroot to install a server (i can't give any more detail about it, sorry). Let's call its path /chr_path/.
To make things easy, I have initialized this chroot with a debootstrap (also wheezy/amd64).
All seemed to work well inside the chroot, but when I started the installer script of my server I got:
zsh: Not found /some_path/perl (the installer includes a perl binary for some reasons)
Naturally, I checked the /some_path/ location and I found the perl binary. Running file on it in the chroot environment returns:
/some_path/perl ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped
The file exists, seems OK, and has the correct permissions. I can use file, ls, and vim on it, but as soon as I try to execute it (./perl, for example) I get: zsh: Not found ./perl.
This situation is quite baffling to me. Moreover:
I can execute other basic binaries (/bin/ls,...) in the chroot without getting errors
I have the same problems for other binaries that came with the project
When I try to execute the binary from the main root (/chr_path/some_path/perl), it works.
I tried putting a copy of my ls next to one of these binaries. I checked that the access rights were the same, but that didn't change anything (one was working and the other wasn't).
|
When you fail to execute a file that depends on a “loader”, the error you get may refer to the loader rather than the file you're executing.
The loader of a dynamically-linked native executable is the part of the system that's responsible for loading dynamic libraries. It's something like /lib/ld.so or /lib/ld-linux.so.2, and should be an executable file.
The loader of a script is the program mentioned on the shebang line, e.g. /bin/sh for a script that begins with #!/bin/sh. (Bash and zsh give a message “bad interpreter” instead of “command not found” in this case.)
The error message is rather misleading in not indicating that the loader is the problem. Unfortunately, fixing this would be hard because the kernel interface only has room for reporting a numeric error code, not for also indicating that the error in fact concerns a different file. Some shells do the work themselves for scripts (reading the #! line on the script and re-working out the error condition), but none that I've seen attempt to do the same for native binaries.
ldd won't work on the binaries either because it works by setting some special environment variables and then running the program, letting the loader do the work. strace wouldn't provide any meaningful information either, since it wouldn't report more than what the kernel reports, and as we've seen the kernel can't report everything it knows.
This situation often arises when you try to run a binary for the right system (or family of systems) and superarchitecture but the wrong subarchitecture. Here you have ELF binaries on a system that expects ELF binaries, so the kernel loads them just fine. They are i386 binaries running on an x86_64 processor, so the instructions make sense and get the program to the point where it can look for its loader. But the program is a 32-bit program (as the file output indicates), looking for the 32-bit loader /lib/ld-linux.so.2, and you've presumably only installed the 64-bit loader /lib64/ld-linux-x86-64.so.2 in the chroot.
You need to install the 32-bit runtime system in the chroot: the loader, and all the libraries the programs need. From Debian wheezy onwards, if you want both i386 and x86_64 support, start with an amd64 installation and activate multiarch support: run dpkg --add-architecture i386, then apt-get update and apt-get install libc6:i386 zlib1g:i386 … (if you want to generate a list of the dependencies of Debian's perl package, to see what libraries are likely to be needed, you can use aptitude search -F %p '~Rdepends:^perl$ ~ri386'). On Debian amd64 before multiarch (up to wheezy), the 32-bit loader is in the libc6-i386 package, and you can pull in a bigger collection of common 32-bit libraries by installing the ia32-libs package.
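You can check which subarchitecture a binary belongs to without readelf or file: byte 5 of an ELF header (the EI_CLASS field) is 01 for 32-bit and 02 for 64-bit, the same distinction `file` reports as "ELF 32-bit" versus "ELF 64-bit".

```shell
# Read the EI_CLASS byte (offset 4, length 1) of a binary as hex.
class=$(od -An -tx1 -j4 -N1 /bin/ls | tr -d ' \n')
case "$class" in
  01) echo "32-bit: wants the loader /lib/ld-linux.so.2" ;;
  02) echo "64-bit: wants the loader /lib64/ld-linux-x86-64.so.2" ;;
esac
```

Running this against the chroot's perl binary would show 01, while the chroot only provides the 64-bit loader, hence the misleading "Not found".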
| Getting "Not found" message when running a 32-bit binary on a 64-bit system |
1,294,624,203,000 |
On 32-bit Linux systems, invoking this
$ /lib/libc.so.6
and on 64-bit systems this
$ /lib/x86_64-linux-gnu/libc.so.6
in a shell, provides an output like this:
GNU C Library stable release version 2.10.1, by Roland McGrath et al.
Copyright (C) 2009 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
Compiled by GNU CC version 4.4.0 20090506 (Red Hat 4.4.0-4).
Compiled on a Linux >>2.6.18-128.4.1.el5<< system on 2009-08-19.
Available extensions:
The C stubs add-on version 2.1.2.
crypt add-on version 2.1 by Michael Glad and others
GNU Libidn by Simon Josefsson
Native POSIX Threads Library by Ulrich Drepper et al
BIND-8.2.3-T5B
RT using linux kernel aio
For bug reporting instructions, please see:
<http://www.gnu.org/software/libc/bugs.html>.
Why and how does this happen, and how is it possible to do the same in other shared libraries?
I looked at /usr/lib to find executables, and I found /usr/lib/libvlc.so.5.5.0. Running it led to a segmentation fault. :-/
|
That library has a main() function or equivalent entry point, and was compiled in such a way that it is useful both as an executable and as a shared object.
Here's one suggestion about how to do this, although it does not work for me.
Here's another in an answer to a similar question on Stack Overflow, which I'll shamelessly plagiarize, tweak, and add a bit of explanation to.
First, source for our example library, test.c:
#include <stdio.h>
void sayHello (char *tag) {
printf("%s: Hello!\n", tag);
}
int main (int argc, char *argv[]) {
sayHello(argv[0]);
return 0;
}
Compile that:
gcc -fPIC -pie -o libtest.so test.c -Wl,-E
Here, we are compiling a shared library (-fPIC), but telling the linker that it's a regular executable (-pie), and to make its symbol table exportable (-Wl,-E), such that it can be usefully linked against.
And, although file will say it's a shared object, it does work as an executable:
> ./libtest.so
./libtest.so: Hello!
Now we need to see if it can really be dynamically linked. An example program, program.c:
#include <stdio.h>
extern void sayHello (char*);
int main (int argc, char *argv[]) {
puts("Test program.");
sayHello(argv[0]);
return 0;
}
Using extern saves us having to create a header. Now compile that:
gcc program.c -L. -ltest
Before we can execute it, we need to add the path of libtest.so for the dynamic loader:
export LD_LIBRARY_PATH=./
Now:
> ./a.out
Test program.
./a.out: Hello!
And ldd a.out will show the linkage to libtest.so.
Note that I doubt this is how glibc is actually compiled, since this method is probably not as portable as glibc itself (see man gcc with regard to the -fPIC and -pie switches), but it demonstrates the basic mechanism. For the real details you'd have to look at the source and makefiles.
| Why and how are some shared libraries runnable, as though they are executables? |
1,294,624,203,000 |
With Bash's source it is possible to execute a script without the execute bit set. This is documented and expected behaviour, but doesn't it defeat the purpose of the execute bit?
I know, that source doesn't create a subshell.
|
Bash is an interpreter; it accepts input and does whatever it wants to. It doesn't need to heed the executable bit. In fact, Bash is portable, and can run on operating systems and filesystems that don't have any concept of an executable bit.
What does care about the executable bit is the operating system kernel. When the Linux kernel performs an exec, for example, it checks that the filesystem is not mounted with a noexec option, it checks the executable bit of the program file, and enforces any requirements imposed by security modules (such as SELinux or AppArmor).
Note that the executable bit is a rather discretionary kind of control. On a Linux x86-64 system, for example, you can bypass the kernel's verification of the executable bit by explicitly invoking /lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 as the interpreter:
cp /bin/ls /tmp/
chmod -x /tmp/ls
/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 /tmp/ls
This is somewhat analogous to sourcing Bash source code in Bash, except that ld.so is the interpreter, and the code that it executes is machine code in ELF format.
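The Bash side of the analogy is trivial to demonstrate (file names invented for the demo): sourcing just reads the file, so the execute bit never enters the picture.

```shell
# A shell fragment with no execute bit at all.
printf 'sourced_var=42\n' > /tmp/fragment.sh
chmod a-x /tmp/fragment.sh

. /tmp/fragment.sh       # fine: the shell simply reads the file
echo "$sourced_var"
# Direct execution (/tmp/fragment.sh) would be refused by the kernel.
```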
| Why does Bash's source not need the execution bit? |
1,294,624,203,000 |
What are the different methods to run a non-nixos executable on NixOs? (For instance proprietary binaries.) I'd like to see also the manual methods.
|
Related answers
If you plan to package a binary and not just run it, you might like this other answer of mine: How to package my software in nix or write my own package derivation for nixpkgs
Short version
Quick and dirty: make sure steam-run is installed (strange name, it has nothing to do with steam), e.g. nix-shell -p steam-run, then:
$ steam-run ./your-binary
Since the creation of this answer, other alternatives to steam-run have been developed; see e.g. nix-ld, which is now part of NixOS (it basically recreates the missing loaders in /lib…). I highly recommend configuring it once and for all; that way you won't need to bother anymore about running non-patched binaries, so you can use NPM etc. without headaches.
(Warning: since a recent Nix update, you only need programs.nix-ld.enable = true;, and variables can be configured using
programs.nix-ld.enable = true;
## If needed, you can add missing libraries here. nix-index-database is your friend to
## find the name of the package from the error message:
## https://github.com/nix-community/nix-index-database
programs.nix-ld.libraries = options.programs.nix-ld.libraries.default ++ (with pkgs; [ yourlibrary ]);
to automatically set the environment variables; but the code below should also work on legacy systems, so I'll leave it here.) Note that you might also need to reboot to make sure the environment variables are set correctly (not tested recently).
programs.nix-ld.enable = true;
environment.variables = {
NIX_LD_LIBRARY_PATH = with pkgs; lib.makeLibraryPath [
stdenv.cc.cc
openssl
xorg.libXcomposite
xorg.libXtst
xorg.libXrandr
xorg.libXext
xorg.libX11
xorg.libXfixes
libGL
libva
pipewire.lib
xorg.libxcb
xorg.libXdamage
xorg.libxshmfence
xorg.libXxf86vm
libelf
# Required
glib
gtk2
bzip2
# Without these it silently fails
xorg.libXinerama
xorg.libXcursor
xorg.libXrender
xorg.libXScrnSaver
xorg.libXi
xorg.libSM
xorg.libICE
gnome2.GConf
nspr
nss
cups
libcap
SDL2
libusb1
dbus-glib
ffmpeg
# Only libraries are needed from those two
libudev0-shim
# Verified games requirements
xorg.libXt
xorg.libXmu
libogg
libvorbis
SDL
SDL2_image
glew110
libidn
tbb
# Other things from runtime
flac
freeglut
libjpeg
libpng
libpng12
libsamplerate
libmikmod
libtheora
libtiff
pixman
speex
SDL_image
SDL_ttf
SDL_mixer
SDL2_ttf
SDL2_mixer
libappindicator-gtk2
libdbusmenu-gtk2
libindicator-gtk2
libcaca
libcanberra
libgcrypt
libvpx
librsvg
xorg.libXft
libvdpau
gnome2.pango
cairo
atk
gdk-pixbuf
fontconfig
freetype
dbus
alsaLib
expat
# Needed for electron
libdrm
mesa
libxkbcommon
];
NIX_LD = lib.fileContents "${pkgs.stdenv.cc}/nix-support/dynamic-linker";
};
There are also nix-alien / nix-autobahn, which also automatically try to add the missing libraries. Finally you can use distrobox, which provides any distribution in a docker/podman container tightly integrated with the host… but from my experience it is more complicated to use than nix-ld, which is truly transparent.
Here is a longer and more detailed explanation, together with various methods, often less dirty.
Long version
Here are several methods (the manual ones are mostly for educational purposes, as most of the time writing a proper derivation is better). I'm not an expert at all, and I made this list partly to learn nix, so if you have better methods, let me know!
So the main issue is that the executable first calls a loader and then needs some libraries to work, and NixOS puts both the loader and the libraries in /nix/store/.
This list gives all the methods I found so far. There are basically three "groups":
the fully manual methods: interesting for educational purposes, and to understand what's going on, but that's all (don't use them in practice, because nothing will prevent the derivations used from being garbage collected later)
the patched versions: these methods modify the executable (automatically when using the recommended method 4 with autoPatchelfHook) to make it point to the right libraries directly
the methods based on FHS, which basically fake a "normal" Linux (heavier to run than the patched versions, so avoid them if possible).
I would recommend method 4 with autoPatchelfHook for a real, proper setup; if you don't have time and just want to run a binary in one line, you may be interested in the quick-and-dirty solution based on steam-run (method 7).
Method 1) Dirty manual method, no patch
You first need to find the loader, for example with file:
$ file wolframscript
wolframscript: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=079684175aa38e3633b60544681b338c0e8831e0, stripped
Here the loader is /lib64/ld-linux-x86-64.so.2. To find the loader of nixos, you can do:
$ ls /nix/store/*glibc*/lib/ld-linux-x86-64.so.2
/nix/store/681354n3k44r8z90m35hm8945vsp95h1-glibc-2.27/lib/ld-linux-x86-64.so.2
You also need to find the libraries that your program requires, for example with ldd or LD_DEBUG=libs:
$ ldd wolframscript
linux-vdso.so.1 (0x00007ffe8fff9000)
libpthread.so.0 => /nix/store/sw54ph775lw7b9g4hlfvpx6fmlvdy8qi-glibc-2.27/lib/libpthread.so.0 (0x00007f86aa321000)
librt.so.1 => /nix/store/sw54ph775lw7b9g4hlfvpx6fmlvdy8qi-glibc-2.27/lib/librt.so.1 (0x00007f86aa317000)
libdl.so.2 => /nix/store/sw54ph775lw7b9g4hlfvpx6fmlvdy8qi-glibc-2.27/lib/libdl.so.2 (0x00007f86aa312000)
libstdc++.so.6 => not found
libm.so.6 => /nix/store/sw54ph775lw7b9g4hlfvpx6fmlvdy8qi-glibc-2.27/lib/libm.so.6 (0x00007f86aa17c000)
libgcc_s.so.1 => /nix/store/sw54ph775lw7b9g4hlfvpx6fmlvdy8qi-glibc-2.27/lib/libgcc_s.so.1 (0x00007f86a9f66000)
libc.so.6 => /nix/store/sw54ph775lw7b9g4hlfvpx6fmlvdy8qi-glibc-2.27/lib/libc.so.6 (0x00007f86a9dae000)
/lib64/ld-linux-x86-64.so.2 => /nix/store/sw54ph775lw7b9g4hlfvpx6fmlvdy8qi-glibc-2.27/lib64/ld-linux-x86-64.so.2 (0x00007f86aa344000)
Here, you see that most libraries are found except libstdc++.so.6. So let's find them! A first quick and dirty way to find them is to check if they are already present in your system:
$ find /nix/store -name libstdc++.so.6
/nix/store/12zhmzzhrwszdc8q3fwgifpwjkwi3mzc-gcc-7.3.0-lib/lib/libstdc++.so.6
In case the library is not already installed, you will certainly prefer to use the more involved nix-index program to search for these files in a much larger database (thanks hydra). For that, first install nix-index and generate the database (this is only needed the first time, but it can take a few minutes to run):
$ nix-index
(You can also use nix run github:mic92/nix-index-database yourlib to avoid recreating the database locally. See also nix-index-update from nix-alien to download the cache automatically from nix-index-database.) Then, to search for a library, you can do something like (note that --top-level removes some entries):
$ nix-locate lib/libstdc++.so.6 --top-level
…
gcc-unwrapped.lib 0 s /nix/store/7fv9v6mnlkb4ddf9kz1snknbvbfbcbx0-gcc-10.3.0-lib/lib/libstdc++.so.6
…
Then you can install these libraries for this quick and dirty example (later we will see better solutions).
Good. Now, we just need to run the program with LD_LIBRARY_PATH configured to point to the directory containing this library (see also makeLibraryPath to generate this string in a derivation), and call the loader we determined in the first step on the executable:
LD_LIBRARY_PATH=/nix/store/12zhmzzhrwszdc8q3fwgifpwjkwi3mzc-gcc-7.3.0-lib/lib/:$LD_LIBRARY_PATH /nix/store/681354n3k44r8z90m35hm8945vsp95h1-glibc-2.27/lib/ld-linux-x86-64.so.2 ./wolframscript
(make sure to use ./ before the executable name, and to keep only the directory of the libraries. If you have several library directories, just concatenate the paths with colons)
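A tiny sketch of that colon concatenation (the store paths are made up for illustration):

```shell
# Build an LD_LIBRARY_PATH value from several library directories
# (hypothetical store paths; substitute the ones found on your system).
join_paths() {
  IFS=: ; printf '%s' "$*"
}
libpath=$(join_paths /nix/store/aaa-gcc-lib/lib /nix/store/bbb-glibc/lib)
echo "$libpath"   # /nix/store/aaa-gcc-lib/lib:/nix/store/bbb-glibc/lib
```

The IFS change stays confined to the command substitution's subshell, so the caller's field splitting is unaffected.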
Method 2) Dirty manual method, with patch
After installing patchelf (with nix-env -i or in your configuration.nix), you can also directly modify the executable to embed the right loader and libraries. To change the loader just run:
patchelf --set-interpreter /nix/store/681354n3k44r8z90m35hm8945vsp95h1-glibc-2.27/lib/ld-linux-x86-64.so.2 wolframscript
and to check:
$ patchelf --print-interpreter wolframscript
/nix/store/681354n3k44r8z90m35hm8945vsp95h1-glibc-2.27/lib/ld-linux-x86-64.so.
and to change the path to the libraries hardcoded in the executable, first check what is the current rpath (empty for me):
$ patchelf --print-rpath wolframscript
and then set the rpath to the library paths you determined before, separated with colons if there are several:
$ patchelf --set-rpath /nix/store/12zhmzzhrwszdc8q3fwgifpwjkwi3mzc-gcc-7.3.0-lib/lib/ wolframscript
$ ./wolframscript
Method 3) Patch in a nix derivation
We can reproduce more or less the same thing in a nix derivation inspired by skypeforlinux
This example presents also an alternative, either you can use:
patchelf --set-interpreter ${glibc}/lib/ld-linux-x86-64.so.2 "$out/bin/wolframscript" || true
(which should be pretty clear once you understand the "manual" method), or
patchelf --set-interpreter "$(cat $NIX_CC/nix-support/dynamic-linker)" "$out/bin/wolframscript" || true
This second method is a bit more subtle, but if you run:
$ nix-shell '<nixpkgs>' -A hello --run 'echo $NIX_CC/nix-support/dynamic-linker "->" $(cat $NIX_CC/nix-support/dynamic-linker)'
/nix/store/8zfm4i1aw4c3l5n6ay311ds6l8vd9983-gcc-wrapper-7.4.0/nix-support/dynamic-linker -> /nix/store/sw54ph775lw7b9g4hlfvpx6fmlvdy8qi-glibc-2.27/lib/ld-linux-x86-64.so.2
you will see that the file $NIX_CC/nix-support/dynamic-linker contains a path to the loader ld-linux-x86-64.so.2.
Put in derivation.nix, this is
{ stdenv, dpkg,glibc, gcc-unwrapped }:
let
# Please keep the version x.y.0.z and do not update to x.y.76.z because the
# source of the latter disappears much faster.
version = "12.0.0";
rpath = stdenv.lib.makeLibraryPath [
gcc-unwrapped
glibc
];
# What is it for?
# + ":${stdenv.cc.cc.lib}/lib64";
src = ./WolframScript_12.0.0_LINUX64_amd64.deb;
in stdenv.mkDerivation {
name = "wolframscript-${version}";
system = "x86_64-linux";
inherit src;
nativeBuildInputs = [
];
buildInputs = [ dpkg ];
unpackPhase = "true";
# Extract and copy executable in $out/bin
installPhase = ''
mkdir -p $out
dpkg -x $src $out
cp -av $out/opt/Wolfram/WolframScript/* $out
rm -rf $out/opt
'';
postFixup = ''
# Why does the following work?
patchelf --set-interpreter "$(cat $NIX_CC/nix-support/dynamic-linker)" "$out/bin/wolframscript" || true
# or
# patchelf --set-interpreter ${glibc}/lib/ld-linux-x86-64.so.2 "$out/bin/wolframscript" || true
patchelf --set-rpath ${rpath} "$out/bin/wolframscript" || true
'';
meta = with stdenv.lib; {
description = "Wolframscript";
homepage = https://www.wolfram.com/wolframscript/;
license = licenses.unfree;
maintainers = with stdenv.lib.maintainers; [ ];
platforms = [ "x86_64-linux" ];
};
}
and in default.nix put:
{ pkgs ? import <nixpkgs> {} }:
pkgs.callPackage ./derivation.nix {}
Compile and run with
nix-build
result/bin/wolframscript
Method 4) Use autoPatchElf: simpler
All the previous methods need a bit of work (you need to find the executables, patch them...). NixOS provides a special "hook", autoPatchelfHook, that automatically patches everything for you! You just need to specify it in (native)BuildInputs, and nix does the magic.
{ stdenv, dpkg, glibc, gcc-unwrapped, autoPatchelfHook }:
let
# Please keep the version x.y.0.z and do not update to x.y.76.z because the
# source of the latter disappears much faster.
version = "12.0.0";
src = ./WolframScript_12.0.0_LINUX64_amd64.deb;
in stdenv.mkDerivation {
name = "wolframscript-${version}";
system = "x86_64-linux";
inherit src;
# Required for compilation
nativeBuildInputs = [
autoPatchelfHook # Automatically setup the loader, and do the magic
dpkg
];
# Required at running time
buildInputs = [
glibc
gcc-unwrapped
];
unpackPhase = "true";
# Extract and copy executable in $out/bin
installPhase = ''
mkdir -p $out
dpkg -x $src $out
cp -av $out/opt/Wolfram/WolframScript/* $out
rm -rf $out/opt
'';
meta = with stdenv.lib; {
description = "Wolframscript";
homepage = https://www.wolfram.com/wolframscript/;
license = licenses.mit;
maintainers = with stdenv.lib.maintainers; [ ];
platforms = [ "x86_64-linux" ];
};
}
Method 5) Use FHS to simulate a classic linux shell, and manually execute the files
Some software may be hard to package that way because it may heavily rely on the FHS file tree structure, or may check that the binaries are unchanged. You can then also use buildFHSUserEnv to provide an FHS file structure (lightweight, using namespaces) for your application. Note that this method is heavier than the patch-based methods and adds significant startup time, so avoid it when possible.
You can either just spawn a shell and then manually extract the archive and execute the file, or directly package your program for the FHS. Let's first see how to get a shell. Put in a file (say fhs-env.nix) the following:
let nixpkgs = import <nixpkgs> {};
in nixpkgs.buildFHSUserEnv {
name = "fhs";
targetPkgs = pkgs: [];
multiPkgs = pkgs: [ pkgs.dpkg ];
runScript = "bash";
}
and run:
nix-build fhs-env.nix
result/bin/fhs
You will then get a bash in a more standard-looking linux, and you can run commands to run your executable, like:
mkdir wolf_fhs/
dpkg -x WolframScript_12.0.0_LINUX64_amd64.deb wolf_fhs/
cd wolf_fhs/opt/Wolfram/WolframScript/bin/
./wolfram
If you need more libraries/programs as dependencies, just add them to multiPkgs (for all supported archs) or targetPkgs (for current arch only).
Bonus: you can also launch an FHS shell with a one-line command, without creating a specific file:
nix-build -E '(import <nixpkgs> {}).buildFHSUserEnv {name = "fhs";}' && ./result/bin/fhs
Method 6) Use FHS to simulate a classic linux shell, and pack the files inside
source: https://reflexivereflection.com/posts/2015-02-28-deb-installation-nixos.html (on archive.org)
Method 7) steam-run
With buildFHSUserEnv you can run lots of software, but you will need to specify manually all the required libraries. If you want a quick solution and don't have time to check precisely which libraries are required, you may want to try steam-run (despite the name, it is not linked directly to Steam, and just packs lots of libraries), which is like buildFHSUserEnv with lots of common libraries preinstalled (some of them may be non-free, like steamrt that packs some nvidia code, thanks simpson!). To use it, just install steam-run, and then:
steam-run ./wolframscript
or if you want a full shell:
steam-run bash
Note that you may need to add nixpkgs.config.allowUnfree = true; (or whitelist this specific package) if you want to install it with nixos-rebuild, and if you want to run/install it with nix-shell/nix-env you need to put { allowUnfree = true; } in ~/.config/nixpkgs/config.nix.
It is not easy to "overwrite" packages or libraries in nix-shell, but if you want to make a wrapper around your script, you can either manually create a wrapper script:
#!/usr/bin/env nix-shell
#!nix-shell -i bash -p steam-run
exec steam-run ./wolframscript "$@"
or directly write it in a Nix derivation:
{ stdenv, steam-run, writeScriptBin }:
let
src = ./opt/Wolfram/WolframScript/bin/wolframscript;
in writeScriptBin "wolf_wrapped_steam" ''
exec ${steam-run}/bin/steam-run ${src} "$@"
''
or if you start from the .deb (here I used makeWrapper instead):
{ stdenv, steam-run, dpkg, writeScriptBin, makeWrapper }:
stdenv.mkDerivation {
name = "wolframscript";
src = ./WolframScript_12.0.0_LINUX64_amd64.deb;
nativeBuildInputs = [
dpkg makeWrapper
];
unpackPhase = "true";
installPhase = ''
mkdir -p $out/bin
dpkg -x $src $out
cp -av $out/opt/Wolfram/WolframScript/bin/wolframscript $out/bin/.wolframscript-unwrapped
makeWrapper ${steam-run}/bin/steam-run $out/bin/wolframscript --add-flags $out/bin/.wolframscript-unwrapped
rm -rf $out/opt
'';
}
(if you are too tired to write the usual default.nix, you can run directly nix-build -E "with import <nixpkgs> {}; callPackage ./derivation.nix {}")
Method 8) Using nix-ld
If you do not want to spawn a sandbox as we did for steam-run (in sandboxes it's impossible to run setuid apps, sandboxes can't be nested, and there is poor integration with the system packages, including direnv), you can recreate the missing loader system-wide by putting in your configuration.nix:
programs.nix-ld.enable = true;
You can see that the file is now present:
$ ls /lib64/
ld-linux-x86-64.so.2
However it is still impossible to run binaries, as the new ld-linux-x86-64.so.2 file only redirects to the loader given in NIX_LD (this way multiple programs can use different loaders while being on the same system):
$ ./blender
cannot execute ./blender: NIX_LD or NIX_LD_x86_64-linux is not set
To locally create this environment variable, you can do something like:
$ cat shell.nix
with import <nixpkgs> {};
mkShell {
NIX_LD_LIBRARY_PATH = lib.makeLibraryPath [
stdenv.cc.cc
openssl
# ...
];
NIX_LD = lib.fileContents "${stdenv.cc}/nix-support/dynamic-linker";
}
$ nix-shell
[nix-shell:/tmp/blender-3.2.2-linux-x64]$ ./blender
or system-wide using:
(warning: since a recent nix update, you only need programs.nix-ld.enable = true;, and variables can be configured using programs.nix-ld.libraries = with pkgs; []; instead of using environment variables; but the code below should also work on legacy systems, so I'll leave it here)
environment.variables = {
NIX_LD_LIBRARY_PATH = lib.makeLibraryPath [
pkgs.stdenv.cc.cc
pkgs.openssl
# ...
];
NIX_LD = lib.fileContents "${pkgs.stdenv.cc}/nix-support/dynamic-linker";
};
Note that you need to restart your X11 session every time you change this file, or do:
$ cat /etc/profile | grep set-environment
. /nix/store/clwf7wsykkjdhbd0v8vb94pvg81lnsba-set-environment
$ . /nix/store/clwf7wsykkjdhbd0v8vb94pvg81lnsba-set-environment
Note that (contrary to steam-run) nix-ld does not come with any library by default (actually, this is not true anymore: see the note at the top of this answer for the more modern interface with a default list… but that list is quite small), but you can add your own or use tools to do that automatically, see below. You can also get inspired by the list of libraries that steam-run packs here: https://github.com/NixOS/nixpkgs/blob/master/pkgs/games/steam/fhsenv.nix Here is for example the file I'm using for now; it is enough to run blender/electron:
programs.nix-ld.enable = true;
environment.variables = {
NIX_LD_LIBRARY_PATH = with pkgs; lib.makeLibraryPath [
stdenv.cc.cc
openssl
xorg.libXcomposite
xorg.libXtst
xorg.libXrandr
xorg.libXext
xorg.libX11
xorg.libXfixes
libGL
libva
pipewire.lib
xorg.libxcb
xorg.libXdamage
xorg.libxshmfence
xorg.libXxf86vm
libelf
# Required
glib
gtk2
bzip2
# Without these it silently fails
xorg.libXinerama
xorg.libXcursor
xorg.libXrender
xorg.libXScrnSaver
xorg.libXi
xorg.libSM
xorg.libICE
gnome2.GConf
nspr
nss
cups
libcap
SDL2
libusb1
dbus-glib
ffmpeg
# Only libraries are needed from those two
libudev0-shim
# Verified games requirements
xorg.libXt
xorg.libXmu
libogg
libvorbis
SDL
SDL2_image
glew110
libidn
tbb
# Other things from runtime
flac
freeglut
libjpeg
libpng
libpng12
libsamplerate
libmikmod
libtheora
libtiff
pixman
speex
SDL_image
SDL_ttf
SDL_mixer
SDL2_ttf
SDL2_mixer
libappindicator-gtk2
libdbusmenu-gtk2
libindicator-gtk2
libcaca
libcanberra
libgcrypt
libvpx
librsvg
xorg.libXft
libvdpau
gnome2.pango
cairo
atk
gdk-pixbuf
fontconfig
freetype
dbus
alsaLib
expat
# Needed for electron
libdrm
mesa
libxkbcommon
];
NIX_LD = lib.fileContents "${pkgs.stdenv.cc}/nix-support/dynamic-linker";
};
You can also find the names of the libraries using nix-index (see above). You can also use nix-alien-ld or nix-autobahn to automatically find and load the libraries for you. Note that if you don't have the right libraries you will get an error like:
$ ./blender
./blender: error while loading shared libraries: libX11.so.6: cannot open shared object file: No such file or directory
You can see at once all the libraries that are not yet available using:
$ LD_LIBRARY_PATH=$NIX_LD_LIBRARY_PATH ldd turtl
libpangocairo-1.0.so.0 => /nix/store/n9h110ffps25rdkkim5k802p3p5w476m-pango-1.50.6/lib/libpangocairo-1.0.so.0 (0x00007f02feb83000)
libatk-1.0.so.0 => not found
…
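As a sketch, the missing names can be extracted from such output automatically (the ldd-style output is hard-coded below so the example is self-contained):

```shell
# Keep only the "not found" entries from ldd output; each first field is
# the library name you then need to locate (e.g. with nix-locate).
ldd_output='libpangocairo-1.0.so.0 => /nix/store/xxx-pango/lib/libpangocairo-1.0.so.0
libatk-1.0.so.0 => not found
libgtk-3.so.0 => not found'
missing=$(printf '%s\n' "$ldd_output" | awk '/not found/ { print $1 }')
printf '%s\n' "$missing"
```

In practice you would pipe the real `LD_LIBRARY_PATH=$NIX_LD_LIBRARY_PATH ldd yourbinary` into the same awk filter.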
Method 9) nix-alien
nix-alien automatically builds an FHS with the appropriate libraries. If you have flakes enabled (otherwise just replace nix run with nix --extra-experimental-features "nix-command flakes" run), you can simply run it this way (nix-alien was not yet packaged in nixpkgs as of 2022):
nix run github:thiagokokada/nix-alien -- yourprogram
It will then automatically find the library using nix-index, asking you some questions when it is not sure (this is cached).
Note that programs that rely on OpenGL need to use nixGL to run (this certainly applies to other methods here):
nix run --impure github:guibou/nixGL --override-input nixpkgs nixpkgs/nixos-21.11 -- nix run github:thiagokokada/nix-alien -- blender
Note that you may need to change the version of nixos-21.11 to ensure that the version of openGl matches your program.
Note that you can also see the automatically generated file (the path is given the first time the program is run):
$ cat /home/leo/.cache/nix-alien/87a5d119-f810-5222-9b47-4809257c60ec/fhs-env/default.nix
{ pkgs ? import <nixpkgs> { } }:
let
inherit (pkgs) buildFHSUserEnv;
in
buildFHSUserEnv {
name = "blender-fhs";
targetPkgs = p: with p; [
xorg.libX11.out
xorg.libXfixes.out
xorg.libXi.out
xorg.libXrender.out
xorg.libXxf86vm.out
xorg_sys_opengl.out
];
runScript = "/tmp/blender-3.2.2-linux-x64/blender";
}
See also the other version working with nix-ld and nix-autobahn.
Method 10) Using containers/Docker (heavier)
TODO
Note that the project distrobox allows you to simply create new containers, tightly integrated with the host, running any distribution you want.
Method 11) Rely on flatpak/appimage
https://nixos.org/nixos/manual/index.html#module-services-flatpak
appimage-run: to test with, e.g., musescore
Sources or examples
https://github.com/NixOS/nixpkgs/blob/5a9eaf02ae3c6403ce6f23d33ae569be3f9ce644/pkgs/applications/video/lightworks/default.nix
https://sandervanderburg.blogspot.com/2015/10/deploying-prebuilt-binary-software-with.html
https://github.com/NixOS/nixpkgs/blob/35c3396f41ec73c5e968a11f46d79a94db4042d7/pkgs/applications/networking/dropbox/default.nix
Also, for people wanting to get started in packaging, I wrote recently a similar tutorial here.
| Different methods to run a non-nixos executable on Nixos |
1,294,624,203,000 |
Is there a one-liner that will list all executables from $PATH in Bash?
|
This is not exactly the list of binaries from $PATH, but it shows every command you could run:
compgen -c
(assuming bash)
Other useful commands
compgen -a # will list all the aliases you could run.
compgen -b # will list all the built-ins you could run.
compgen -k # will list all the keywords you could run.
compgen -A function # will list all the functions you could run.
compgen -A function -abck # will list all the above in one go.
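Note that compgen -c also includes functions, aliases, builtins and keywords. As a rough sketch, here is a portable (POSIX sh) helper that lists only the executable files in one directory; loop it over the colon-separated components of $PATH to get just the binaries (the demo directory below is made up so the output is predictable):

```shell
# List executable regular files in a directory (run this over each $PATH
# component for a portable, binaries-only listing).
list_bins_in() {
  for f in "$1"/*; do
    [ -f "$f" ] && [ -x "$f" ] && printf '%s\n' "${f##*/}"
  done
}
demo=$(mktemp -d)           # demo directory with predictable contents
touch "$demo/mytool" && chmod +x "$demo/mytool"
touch "$demo/notes.txt"     # not executable: skipped
bins=$(list_bins_in "$demo")
rm -rf "$demo"
echo "$bins"   # mytool
```

Unlike compgen, this works in any POSIX shell, not just bash.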
| List all binaries from $PATH |
1,294,624,203,000 |
I recently learned that (at least on Fedora and Red Hat Enterprise Linux), executable programs that are compiled as Position Independent Executables (PIE) receive stronger address space randomization (ASLR) protection.
So: How do I test whether a particular executable was compiled as a Position Independent Executable, on Linux?
|
You can use the perl script contained in the hardening-check package, available in Fedora and Debian (as hardening-includes). Read this Debian wiki page for details on what compile flags are checked. It's Debian specific, but the theory applies to Red Hat as well.
Example:
$ hardening-check $(which sshd)
/usr/sbin/sshd:
Position Independent Executable: yes
Stack protected: yes
Fortify Source functions: yes (some protected functions found)
Read-only relocations: yes
Immediate binding: yes
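If hardening-check isn't available, a rough, tool-free sketch is to read the ELF e_type field directly (the byte at offset 16 of the header; this assumes a little-endian target such as x86-64). PIE binaries report ET_DYN, like shared objects:

```shell
# Print the ELF object type: ET_EXEC (2) means a fixed-address
# executable, ET_DYN (3) means PIE (or a shared library).
elf_type() {
  t=$(od -An -j16 -N1 -tu1 "$1" | tr -d ' ')
  case $t in
    2) echo EXEC ;;
    3) echo DYN ;;
    *) echo "other($t)" ;;
  esac
}
elf_type /bin/sh
```

Caveat: ET_DYN alone doesn't distinguish a PIE executable from a shared library; for a file you know is an executable, DYN implies PIE.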
| How to test whether a Linux binary was compiled as position independent code? |
1,294,624,203,000 |
From man file,
EXAMPLES
$ file file.c file /dev/{wd0a,hda}
file.c: C program text
file: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
dynamically linked (uses shared libs), stripped
/dev/wd0a: block special (0/0)
/dev/hda: block special (3/0)
$ file -s /dev/wd0{b,d}
/dev/wd0b: data
/dev/wd0d: x86 boot sector
$ file -s /dev/hda{,1,2,3,4,5,6,7,8,9,10}
/dev/hda: x86 boot sector
/dev/hda1: Linux/i386 ext2 filesystem
/dev/hda2: x86 boot sector
/dev/hda3: x86 boot sector, extended partition table
/dev/hda4: Linux/i386 ext2 filesystem
/dev/hda5: Linux/i386 swap file
/dev/hda6: Linux/i386 swap file
/dev/hda7: Linux/i386 swap file
/dev/hda8: Linux/i386 swap file
/dev/hda9: empty
/dev/hda10: empty
$ file -i file.c file /dev/{wd0a,hda}
file.c: text/x-c
file: application/x-executable, dynamically linked (uses shared libs),
not stripped
/dev/hda: application/x-not-regular-file
/dev/wd0a: application/x-not-regular-file
What does executable stripping mean?
Why are some of the executables stripped while others are not?
|
If you compile an executable with gcc's -g flag, it contains debugging information. That means that for each instruction there is information about which line of the source code generated it, the names of the variables in the source code are retained and can be associated with the matching memory at runtime, etc. strip can remove this debugging information and other data included in the executable which are not necessary for execution, in order to reduce the size of the executable.
| What are stripped and not-stripped executables in Unix? |
1,294,624,203,000 |
I have a Python script that need to be run with a particular python installation.
Is there a way to craft a shebang so that it runs with $FOO/bar/MyCustomPython?
|
The shebang line is very limited. Under many unix variants (including Linux), you can have only two words: a command and a single argument. There is also often a length limitation.
The general solution is to write a small shell wrapper. Name the Python script foo.py, and put the shell script next to foo.py and call it foo. This approach doesn't require any particular header on the Python script.
#!/bin/sh
exec "$FOO/bar/MyCustomPython" "$0.py" "$@"
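Here is a runnable miniature of this wrapper pattern, with cat standing in for the custom interpreter so the sketch is self-contained:

```shell
# foo is the wrapper; foo.py is the "script". The wrapper execs the
# interpreter on "$0.py", i.e. its own path plus the .py suffix.
dir=$(mktemp -d)
printf '%s\n' '#!/bin/sh' 'exec cat "$0.py" "$@"' > "$dir/foo"
printf 'print("hi")\n' > "$dir/foo.py"
chmod +x "$dir/foo"
out=$(sh "$dir/foo")   # cat just prints foo.py's contents
rm -rf "$dir"
echo "$out"   # print("hi")
```

With a real interpreter in place of cat, running foo runs foo.py under whatever interpreter the wrapper names, environment variables included.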
Another tempting approach is to write a wrapper script like the one above, and put #!/path/to/wrapper/script as the shebang line on the Python script. However, most unices don't support chaining of shebang scripts, so this won't work.
If MyCustomPython was in the $PATH, you could use env to look it up:
#!/usr/bin/env MyCustomPython
import …
Yet another approach is to arrange for the script to be both a valid shell script (which loads the right Python interpreter on itself) and a valid script in the target language (here Python). This requires that you find a way to write such a dual-language script for your target language. In Perl, this is known as if $running_under_some_shell.
#!/bin/sh
eval 'exec "$FOO/bar/MyCustomPerl" -wS $0 ${1+"$@"}'
if $running_under_some_shell;
use …
Here's one way to achieve the same effect in Python. In the shell, "true" is the true utility, which ignores its arguments (: and the single backslash that '''\' quotes down to) and returns a true value. In Python, "true" is a string, which is true when interpreted as a boolean, so this is an if instruction whose condition is always true and whose body is a string literal.
#!/bin/sh
if "true" : '''\'
then
exec "$FOO/bar/MyCustomPython" "$0" "$@"
exit 127
fi
'''
import …
Rosetta code has such dual-language scripts in several other languages.
| How can I use environment variables in my shebang? |
1,294,624,203,000 |
I'm wondering about the way Linux manages shared libraries. (actually I'm talking about Maemo Fremantle, a Debian-based distro released in 2009 running on 256MB RAM).
Let's assume we have two executables linking to libQtCore.so.4 and using its symbols (using its classes and functions). For simplicity's sake let's call them a and b. We assume that both executables link to the same libraries.
First we launch a. The library has to be loaded. Is it loaded as a whole, or is only the required part loaded into memory (since we don't use every class, is only the code for the classes actually used loaded)?
Then we launch b. We assume that a is still running. b links to libQtCore.so.4 too and uses some of the classes that a uses, but also some that aren't used by a. Will the library be loaded twice (separately for a and separately for b)? Or will they use the same object already in RAM? If b uses no new symbols and a is already running, will the RAM used by shared libraries increase? (Or will the difference be insignificant?)
|
NOTE: I'm going to assume that your machine has a memory mapping unit (MMU). There is a Linux version (µClinux) that doesn't require an MMU, and this answer doesn't apply there.
What is an MMU? It's hardware—part of the processor and/or memory controller. Understanding shared library linking doesn't require you to understand exactly how an MMU works, just that an MMU allows there to be a difference between logical memory addresses (the ones used by programs) and physical memory addresses (the ones actually present on the memory bus). Memory is broken down into pages, typically 4K in size on Linux. With 4k pages, logical addresses 0–4095 are page 0, logical addresses 4096–8191 are page 1, etc. The MMU maps those to physical pages of RAM, and each logical page can be typically mapped to 0 or 1 physical pages. A given physical page can correspond to multiple logical pages (this is how memory is shared: multiple logical pages correspond to the same physical page). Note this applies regardless of OS; it's a description of the hardware.
On process switch, the kernel changes the MMU page mappings, so that each process has its own space. Address 4096 in process 1000 can be (and usually is) completely different from address 4096 in process 1001.
Pretty much whenever you see an address, it is a logical address. User space programs hardly ever deal with physical addresses.
Now, there are multiple ways to build libraries as well. Let's say a program calls the function foo() in the library. The CPU doesn't know anything about symbols, or function calls really—it just knows how to jump to a logical address, and execute whatever code it finds there. There are a couple of ways it could do this (and similar things apply when a library accesses its own global data, etc.):
It could hard-code some logical address to call it at. This requires that the library always be loaded at the exact same logical address. If two libraries require the same address, dynamic linking fails and you can't launch the program. Libraries can require other libraries, so this basically requires every library on the system to have unique logical addresses. It's very fast, though, if it works. (This is how a.out did things, and the kind of set up that prelinking does, sort of).
It could hard-code a fake logical address, and tell the dynamic linker to edit in the proper one when loading the library. This costs a fair bit of time when loading the libraries, but after that it is very fast.
It could add a layer of indirection: use a CPU register to hold the logical address the library is loaded at, and then access everything as an offset from that register. This imposes a performance cost on each access.
Pretty much no one uses #1 anymore, at least not on general-purpose systems. Keeping that unique logical address list is impossible on 32-bit systems (there aren't enough to go around) and an administrative nightmare on 64-bit systems. Pre-linking sort of does this, though, on a per-system basis.
Whether #2 or #3 is used depends on if the library was built with GCC's -fPIC (position independent code) option. #2 is without, #3 is with. Generally, libraries are built with -fPIC, so #3 is what happens.
For more details, see the Ulrich Drepper's How to Write Shared Libraries (PDF).
So, finally, your question can be answered:
If the library is built with -fPIC (as it almost certainly should be), the vast majority of pages are exactly the same for every process that loads it. Your processes a and b may well load the library at different logical addresses, but those will point to the same physical pages: the memory will be shared. Further, the data in RAM exactly matches what is on disk, so it can be loaded only when needed by the page fault handler.
If the library is built without -fPIC, then it turns out that most pages of the library will need link edits, and will be different. Therefore, they must be separate physical pages (as they contain different data). That means they're not shared. The pages don't match what is on disk, so I wouldn't be surprised if the entire library is loaded. It can of course subsequently be swapped out to disk (in the swapfile).
You can examine this with the pmap tool, or directly by checking various files in /proc. For example, here is a (partial) output of pmap -x on two different newly-spawned bcs. Note that the addresses shown by pmap are, as typical, logical addresses:
pmap -x 14739
Address Kbytes RSS Dirty Mode Mapping
00007f81803ac000 244 176 0 r-x-- libreadline.so.6.2
00007f81803e9000 2048 0 0 ----- libreadline.so.6.2
00007f81805e9000 8 8 8 r---- libreadline.so.6.2
00007f81805eb000 24 24 24 rw--- libreadline.so.6.2
pmap -x 17739
Address Kbytes RSS Dirty Mode Mapping
00007f784dc77000 244 176 0 r-x-- libreadline.so.6.2
00007f784dcb4000 2048 0 0 ----- libreadline.so.6.2
00007f784deb4000 8 8 8 r---- libreadline.so.6.2
00007f784deb6000 24 24 24 rw--- libreadline.so.6.2
You can see that the library is loaded in multiple parts, and pmap -x gives you details on each separately. You'll notice that the logical addresses are different between the two processes; you'd reasonably expect them to be the same (since it's the same program running, and computers are usually predictable like that), but there is a security feature called address space layout randomization that intentionally randomizes them.
You can see from the difference in size (Kbytes) and resident size (RSS) that the entire library segment has not been loaded. Finally, you can see that for the larger mappings, dirty is 0, meaning it corresponds exactly to what is on disk.
You can re-run with pmap -XX, and it'll show you (the exact columns shown by -XX vary with the kernel version you're running) that the first mapping has a Shared_Clean of 176, which exactly matches the RSS. Shared memory means the physical pages are shared between multiple processes, and since it matches the RSS, that means all of the library that is in memory is shared (look at the See Also below for further explanation of shared vs. private):
pmap -XX 17739
Address Perm Offset Device Inode Size Rss Pss Shared_Clean Shared_Dirty Private_Clean Private_Dirty Referenced Anonymous AnonHugePages Swap KernelPageSize MMUPageSize Locked VmFlagsMapping
7f784dc77000 r-xp 00000000 fd:00 1837043 244 176 19 176 0 0 0 176 0 0 0 4 4 0 rd ex mr mw me sd libreadline.so.6.2
7f784dcb4000 ---p 0003d000 fd:00 1837043 2048 0 0 0 0 0 0 0 0 0 0 4 4 0 mr mw me sd libreadline.so.6.2
7f784deb4000 r--p 0003d000 fd:00 1837043 8 8 8 0 0 0 8 8 8 0 0 4 4 0 rd mr mw me ac sd libreadline.so.6.2
7f784deb6000 rw-p 0003f000 fd:00 1837043 24 24 24 0 0 0 24 24 24 0 0 4 4 0 rd wr mr mw me ac sd libreadline.so.6.2
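The same accounting can be reproduced directly from /proc; here is a minimal sketch run against the current shell (the Shared_Clean and Private_Dirty field names assume a Linux /proc/PID/smaps):

```shell
# Sum Shared_Clean (disk-backed, shared with other processes) and
# Private_Dirty (private to this process) across all mappings of the
# current shell, using its /proc smaps file.
grep -E '^(Shared_Clean|Private_Dirty):' "/proc/$$/smaps" \
    | awk '{sum[$1] += $2} END {for (k in sum) print k, sum[k], "kB"}'
```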
See Also
Getting information about a process' memory usage from /proc/pid/smaps for an explanation of the whole clean/dirty shared/private thing.
| Loading of shared libraries and RAM usage |
1,294,624,203,000 |
There are several ways to execute a script. The ones I know are:
/path/to/script # using the path (absolute or relative)
. script # using the . (dot)
source script # using the `source` command
Are there any other ways? What are the differences between them? Are there situations where I must use one and not another?
|
Another way is by calling the interpreter and passing the path to the script to it:
/bin/sh /path/to/script
The dot and source are equivalent. (EDIT: no, they're not: as KeithB points out in a comment on another answer, "." only works in bash-related shells, where "source" works in both bash- and csh-related shells.) It executes the script in place (as if you copied and pasted the script right there). This means that any functions and non-local variables in the script remain. It also means that if the script does a cd into a directory, you'll still be there when it's done.
The other ways of running a script will run it in its own subshell. Variables in the script are not still alive when it's done. If the script changed directories, then it doesn't affect the calling environment.
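The subshell-versus-sourcing difference is easy to demonstrate with a throwaway script (a sketch; the variable name is arbitrary):

```shell
# A child shell cannot change the caller's variables; sourcing the
# same file can, because it runs in the current shell.
tmp=$(mktemp)
echo 'MYVAR=changed' > "$tmp"

MYVAR=original
bash "$tmp"          # runs in its own subshell
echo "$MYVAR"        # prints: original

. "$tmp"             # runs in the current shell
echo "$MYVAR"        # prints: changed
rm -f "$tmp"
```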
/path/to/script and /bin/sh script are slightly different. Typically, a script has a "shebang" at the beginning that looks like this:
#! /bin/bash
This is the path to the script interpreter. If it specifies a different interpreter than you do when you execute it, then it may behave differently (or may not work at all).
For example, Perl scripts and Ruby scripts begin with (respectively):
#! /bin/perl
and
#! /bin/ruby
If you execute one of those scripts by running /bin/sh script, then they will not work at all.
Ubuntu's default system shell (/bin/sh) actually isn't bash, but a very similar one called dash. Scripts that require bash may work slightly wrong when called by doing /bin/sh script, because you've just called a bash script using the dash interpreter.
Another small difference between calling the script directly and passing the script path to the interpreter is that the script must be marked executable to run it directly, but not to run it by passing the path to the interpreter.
Another minor variation: you can prefix any of these ways to execute a script with eval, so, you can have
eval sh script
eval script
eval . script
and so on. It doesn't actually change anything, but I thought I'd include it for thoroughness.
| Different ways to execute a shell script |
1,294,624,203,000 |
Why does sshd require an absolute path when restarting, e.g /usr/sbin/sshd rather than sshd
Are there any security implications?
P.S the error message:
# sshd
sshd re-exec requires execution with an absolute path
|
This is specific to OpenSSH from version 3.9 onwards.
For every new connection, sshd will re-execute itself, to ensure that all execute-time randomisations are regenerated. In order to re-execute itself, sshd needs to know the full path to its own binary.
Here's a quote from the release notes for 3.9:
Make sshd(8) re-execute itself on accepting a new connection. This security measure ensures that all execute-time randomisations are
reapplied for each connection rather than once, for the master
process' lifetime. This includes mmap and malloc mappings, shared
library addressing, shared library mapping order, ProPolice and
StackGhost cookies on systems that support such things
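The constraint itself is easy to see outside of sshd; here is a toy sketch of a self-re-executing program (the file location via mktemp is an assumption, chosen precisely because it yields an absolute path):

```shell
# A process that re-executes itself via exec "$0" needs $0 to remain
# resolvable, which is only guaranteed when it holds an absolute path.
demo=$(mktemp)
cat > "$demo" <<'EOF'
#!/bin/sh
if [ -z "$REEXECED" ]; then
    echo "first run as: $0"
    export REEXECED=1
    exec "$0"        # works because mktemp gave us an absolute path
fi
echo "re-executed as: $0"
EOF
chmod +x "$demo"
"$demo"
rm -f "$demo"
```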
In any case, it is usually better to restart a service using either its init script (e.g. /etc/init.d/sshd restart) or using service sshd restart. If nothing else, it will help you verify that the service will start properly after the next reboot...
(original answer, now irrelevant: My first guess would be that /usr/sbin isn't in your $PATH.)
| Why does sshd require an absolute path? |
1,294,624,203,000 |
I know there are many differences between OSX and Linux, but what makes them so totally different, that makes them fundamentally incompatible?
|
The whole ABI is different, not just the binary format (Mach-O versus ELF) as sepp2k mentioned.
For example, while both Linux and Darwin/XNU (the kernel of OS X) use sc on PowerPC and int 0x80/sysenter/syscall on x86 for syscall entry, there's not much more in common from there on.
Darwin directs negative syscall numbers at the Mach microkernel and positive syscall numbers at the BSD monolithic kernel — see xnu/osfmk/mach/syscall_sw.h and xnu/bsd/kern/syscalls.master. Linux's syscall numbers vary by architecture — see linux/arch/powerpc/include/asm/unistd.h, linux/arch/x86/include/asm/unistd_32.h, and linux/arch/x86/include/asm/unistd_64.h — but are all nonnegative. So obviously syscall numbers, syscall arguments, and even which syscalls exist are different.
The standard C runtime libraries are different too; Darwin mostly inherits FreeBSD's libc, while Linux typically uses glibc (but there are alternatives, like eglibc and dietlibc and uclibc and Bionic).
Not to mention that the whole graphics stack is different; ignoring the whole Cocoa Objective-C libraries, GUI programs on OS X talk to WindowServer over Mach ports, while on Linux, GUI programs usually talk to the X server over UNIX domain sockets using the X11 protocol. Of course there are exceptions; you can run X on Darwin, and you can bypass X on Linux, but OS X applications definitely do not talk X.
Like Wine, if somebody put the work into
implementing a binary loader for Mach-O
trapping every XNU syscall and converting it to appropriate Linux syscalls
writing replacements for OS X libraries like CoreFoundation as needed
writing replacements for OS X services like WindowServer as needed
then running an OS X program "natively" on Linux could be possible. Years ago, Kyle Moffet did some work on the first item, creating a prototype binfmt_mach-o for Linux, but it was never completed, and I know of no other similar projects.
(In theory this is quite possible, and similar efforts have been done many times; in addition to Wine, Linux itself has support for running binaries from other UNIXes like HP-UX and Tru64, and the Glendix project aims to bring Plan 9 compatibility to Linux.)
Somebody has put in the effort to implement a Mach-O binary loader and API translator for Linux!
shinh/maloader - GitHub takes the Wine-like approach of loading the binary and trapping/translating all the library calls in userspace. It completely ignores syscalls and all graphical-related libraries, but is enough to get many console programs working.
Darling builds upon maloader, adding libraries and other supporting runtime bits.
| What makes OSX programs not runnable on Linux? |
1,294,624,203,000 |
And now I am unable to chmod it back, or use any of my other system programs. Luckily this is on a VM I've been toying with, but is there any way to resolve this? The system is Ubuntu Server 12.10.
I have attempted to restart into recovery mode; unfortunately I am now unable to boot into the system at all, because permission errors prevent some programs from running after init-bottom. The system just hangs. This is what I see:
Begin: Running /scripts/init-bottom ... done
[ 37.062059] init: Failed to spawn friendly-recovery pre-start process: unable to execute: Permission denied
[ 37.084744] init: Failed to spawn friendly-recovery post-stop process: unable to execute: Permission denied
[ 37.101333] init: plymouth main process (220) killed by ABRT signal
After this the computer hangs.
|
Boot another clean OS, mount the file system and fix permissions.
As your broken file system lives in a VM, you should have your host system available and working. Mount your broken file system there and fix it.
In case of QEMU/KVM you can for example mount the file system using nbd.
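Once the broken tree is mounted, the repair itself is a plain chmod run from the healthy host. The following simulation uses a scratch directory standing in for the mounted guest (the real mount point, e.g. /mnt, and the blanket 755 mode are assumptions; a package manager can restore exact per-file modes more precisely):

```shell
root=$(mktemp -d)            # stands in for the mounted guest, e.g. /mnt
mkdir -p "$root/bin"
cp /bin/ls "$root/bin/ls"
chmod -R 000 "$root/bin"     # reproduce the damage
"$root/bin/ls" 2>/dev/null || echo "broken: permission denied"
chmod -R 755 "$root/bin"     # the fix, run from the working host system
"$root/bin/ls" "$root" && echo "fixed"
rm -rf "$root"
```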
| How to recover from a chmod -R 000 /bin? |
1,294,624,203,000 |
I'm stumped. I have a script in my /home directory which is executable:
[user@server ~]$ ll
total 4
-rwx------ 1 user user 2608 Jul 15 18:23 qa.sh
However, when I attempt to run it with sudo it says it can't find it:
[user@server ~]$ sudo ./qa.sh
[sudo] password for user:
sudo: unable to execute ./qa.sh: No such file or directory
This is on a fresh build. No changes have been made which would cause problems. In fact, the point of the script is to ensure that it is actually built according to our policies. Perhaps it isn't, and sudo is actually being broken during the build?
I should also note that I can run sudo with other commands in other directories.
EDIT: The script ( I didn't write it so don't /bin/bash me over it, please ;) )
#! /bin/bash
. /root/.bash_profile
customer=$1
if [ -z "$customer" ]; then
echo "Customer not provided. Exiting..."
exit 1
fi
space ()
{
echo
echo '###########################################################################'
echo '###########################################################################'
echo '###########################################################################'
echo
}
g=/bin/egrep
$g ^Listen /etc/ssh/sshd_config
$g ^PermitR /etc/ssh/sshd_config
$g ^LogL /etc/ssh/sshd_config
$g ^PubkeyA /etc/ssh/sshd_config
$g ^HostbasedA /etc/ssh/sshd_config
$g ^IgnoreR /etc/ssh/sshd_config
$g ^PermitE /etc/ssh/sshd_config
$g ^ClientA /etc/ssh/sshd_config
space
$g 'snyder|rsch|bream|shud|mweb|dam|kng|cdu|dpr|aro|pvya' /etc/passwd ; echo ; echo ; $g 'snyder|rsch|bream|shud|mweb|dam|kng|cdu|dpr|aro|pvya' /etc/shadow
space
$g 'dsu|scan' /etc/passwd ; echo ; echo ; $g 'dsu|scan' /etc/shadow
space
$g ${customer}admin /etc/passwd
space
chage -l ${customer}admin
space
$g 'urs|cust|dsu' /etc/sudoers
space
$g dsu /etc/security/access.conf
space
$g account /etc/pam.d/login
space
/sbin/ifconfig -a | $g addr | $g -v inet6
space
echo "10.153.156.0|10.153.174.160|10.120.80.0|10.152.80.0|10.153.193.0|172.18.1.0|10.153.173.0"
echo
$g '10.153.156.0|10.153.174.160|10.120.80.0|10.152.80.0|10.153.193.0|172.18.1.0|10.153.173.0' /etc/sysconfig/network-scripts/route-eth1
space
cat /etc/sysconfig/network-scripts/route-eth2
space
netstat -rn | tail -1
space
cat /etc/sysconfig/iptables
space
cat /etc/hosts
space
##file /usr/local/groundwork ; echo ; echo ; /sbin/service gdma status
##space
cat /etc/resolv.conf
space
HOSTNAME=`echo $HOSTNAME | awk -F. '{ print $1 }'`
nslookup ${HOSTNAME}
echo
echo
nslookup ${HOSTNAME}-mgt
echo
echo
nslookup ${HOSTNAME}-bkp
space
/sbin/service rhnsd status ; echo ; echo ; /sbin/chkconfig --list rhnsd ; echo ; echo ; yum update --security
space
/sbin/service osad status ; echo ; echo ; /sbin/chkconfig --list osad
space
/sbin/service sshd status ; echo ; echo ; /sbin/chkconfig --list sshd
space
/sbin/service snmpd status ; echo ; echo ; /sbin/chkconfig --list snmpd ; echo ; echo ; echo ; cat /etc/snmp/snmpd.conf
space
df -h
space
cat /proc/cpuinfo | $g ^processor
space
free -g
space
if [ -f /etc/rsyslog.conf ]; then
tail -3 /etc/rsyslog.conf
else
echo "This system is not running rsyslog."
fi
rm -f $0
|
This usually happens when the shebang (#!) line in your script is broken.
The shebang is what tells the kernel the file needs to be executed using an interpreter. When run without sudo, the message is a little more meaningful. But with sudo you get the message you got.
For example:
$ cat test.sh
#!/bin/foo
echo bar
$ ./test.sh
bash: ./test.sh: /bin/foo: bad interpreter: No such file or directory
$ bash test.sh
bar
$ sudo ./test.sh
sudo: unable to execute ./test.sh: No such file or directory
$ sudo bash ./test.sh
bar
The bad interpreter message clearly indicates that it's the shebang which is faulty.
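One frequent way the shebang breaks without looking broken is an invisible carriage return from DOS/Windows line endings; here is a sketch of diagnosing and fixing that (GNU sed is assumed for -i):

```shell
cd "$(mktemp -d)"
printf '#!/bin/bash\r\necho hi\n' > crlf.sh   # simulate a CRLF-damaged script
chmod +x crlf.sh
./crlf.sh || true        # fails: the requested interpreter is "/bin/bash\r"
head -1 crlf.sh | od -c | head -2             # the stray \r is now visible
sed -i 's/\r$//' crlf.sh                      # strip the carriage returns
./crlf.sh                # prints: hi
```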
| sudo: unable to execute ./script.sh: no such file or directory |
1,294,624,203,000 |
The example I have is Minecraft. When running Bukkit on Linux I can remove or update the .jar files in the /plugins folder and simply run the 'reload' command.
In Windows, I have to take the whole server process down because it will complain that the .jar file is currently in use when I try to remove or replace it.
This is awesome to me, but why does it happen?
What is Linux doing differently here?
|
Linux deletes a file in a completely different way than Windows does. First, a brief explanation of how files are managed in the native *nix file systems.
The file is kept on the disk in a multilevel structure called an i-node. Each i-node has a unique number within a single filesystem. The i-node structure holds various information about a file, like its size, the data blocks allocated for the file, etc., but for the sake of this answer the most important data element is the link counter. Directories are files that keep records about other files. Each record has the i-node number it refers to, the file name length and the file name itself. This scheme allows one to have 'pointers', i.e. 'links', to the same file in different places with different names. The link counter of the i-node keeps the number of links that refer to this i-node.
What happens when some process opens a file? First the open() function searches for the file record. Then it checks whether an in-memory i-node structure for this i-node already exists. This may happen if some application already has this file open. Otherwise, the system initializes a new in-memory i-node structure. Then the system increments the open counter of the in-memory i-node structure and returns a file descriptor to the application.
The Linux library call to delete a file is called unlink. This function removes the file record from a directory and decrements the i-node's link counter. If the system finds that an in-memory i-node structure exists and its open counter is not zero, then this call returns control to the application. Otherwise it checks whether the link counter has become zero, and if it has, the system frees all blocks allocated for the i-node and the i-node itself, then returns to the application.
What happens when an application closes a file? The close() function decrements the open counter and checks its value. If the value is non-zero the function returns to the application. Otherwise it checks whether the i-node's link counter is zero. If it is zero, it frees all blocks of the file and the i-node before returning to the application.
This mechanism allows you to "delete" a file while it is opened. At the same time the application that opened a file still has access to the data in the file. So, JRE, in your example, still keeps its version of file opened while there is another, updated version on the disk.
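This is straightforward to observe from the shell (a sketch; the file name and contents are arbitrary):

```shell
# After unlink, the directory entry is gone but the i-node survives
# as long as an open descriptor still references it.
tmp=$(mktemp)
echo "still here" > "$tmp"
exec 3< "$tmp"               # keep an open read descriptor
rm "$tmp"                    # unlink: removes the directory record
ls -l "$tmp" 2>/dev/null || echo "directory entry removed"
cat <&3                      # prints: still here
exec 3<&-                    # last reference closed: blocks are freed
```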
Moreover, this feature allows you to update glibc (libc), the core library of all applications on your system, without interrupting its normal operation.
Windows
20 years ago, under DOS, we did not know any file system other than FAT. This file system has a different structure and management principles. These principles do not allow you to delete a file while it is open, so DOS, and later Windows, had to deny any delete request on a file that is open. NTFS could probably allow the same behavior as *nix file systems, but Microsoft decided to maintain the habitual deletion behavior.
This is the answer. Not short, but now you have the idea.
Edit:
A good read on sources of Win32 mess: https://web.archive.org/web/20190218083407/https://blogs.msdn.microsoft.com/oldnewthing/20040607-00/?p=38993
Credits to @Jon
| What is Linux doing differently that allows me to remove/replace files where Windows would complain the file is currently in use? |
1,294,624,203,000 |
We have two systems with similar hardware (main point being the processor, let us say a standard intel core 2 duo).
One is running (insert your linux distro here: Ubuntu will be used henceforth), and the other is running let's say Mac OS X.
One compiles an equivalent program, Let us say something like:
int main()
{
int cat = 33;
int dog = 5*cat;
return dog;
}
The code is extremely simple, because I don't want to consider the implications of shared libraries yet.
When compiled on the respective systems, isn't the main difference between the outputs a matter of ELF vs Mach-O? If one were to strip each binary of its formatting, leaving a flat binary, wouldn't the disassembled machine instructions be the same? (With perhaps a few differences depending on the compiler's habits/tendencies.)
If one were to develop a program to repackage the flat binary produced on our Ubuntu system into the Mach-O format, would it run on the Mac OS X system? Then, if one only had the compiled binary of the supposed program above, and one had this mystical tool for repackaging flat binaries, would simple programs be able to run on the Mac OS X system?
Now let us take it a bit further.
We now have a program with source such as:
#include <stdio.h>
int main()
{
printf("I like tortoises, but not porpoises");
return 0;
}
Assuming this program is compiled and statically linked, would our magical program still be able to repackage the raw binary in the Mach-O format and have it work on Mac OS X? Seeing as it would not need to rely on any other binaries (which the Mac system would not have in this case).
And now for the final level;
What if we used this supposed program to convert all of the necessary shared libraries to the Mach-O format, and then instead compiled the program above with dynamic linking. Would the program still succeed to run?
That should be it for now; obviously each step of absurdity relies on the previous base to even make sense. So if the very first pillar gets destroyed, I doubt there would be much merit to the remaining tiers.
I definitely would not even go as far as to think of this with programs with GUI's in mind. Windowing systems would likely be a whole other headache. I am only considering command line programs at this stage.
Now, I invite the world to correct me,and tell me everything that is wrong with my absurd line of thinking.
|
You forget one crucial thing, namely that your program will have to interact with the operating system to do anything interesting.
The conventions are different between Linux and OS X so the same binary cannot run as-is without essentially having a chunk of operating system dependent code to be able to interact with it. Many of these things are provided through libraries, which you then need to link in, and that means your program needs to be linkable, and linking is also different between the two systems.
And so it goes on and on. What on the surface sounds like doing the same thing is very different in the actual details.
| Binary compatibility between Mac OS X and Linux |
1,294,624,203,000 |
I want to launch the wine executable (Version 2.12), but I get the following error ($=shell prompt):
$ wine
bash: /usr/bin/wine: No such file or directory
$ /usr/bin/wine
bash: /usr/bin/wine: No such file or directory
$ cd /usr/bin
$ ./wine
bash: ./wine: No such file or directory
However, the file is there:
$ which wine
/usr/bin/wine
The executable definitely is there and no dead symlink:
$ stat /usr/bin/wine
File: /usr/bin/wine
Size: 9712 Blocks: 24 IO Block: 4096 regular file
Device: 802h/2050d Inode: 415789 Links: 1
Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2017-07-13 13:53:00.000000000 +0200
Modify: 2017-07-08 03:42:45.000000000 +0200
Change: 2017-07-13 13:53:00.817346043 +0200
Birth: -
It is a 32-bit ELF:
$ file /usr/bin/wine
/usr/bin/wine: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.32,
BuildID[sha1]=eaf6de433d8196e746c95d352e0258fe2b65ae24, stripped
I can get the dynamic section of the executable:
$ readelf -d /usr/bin/wine
Dynamic section at offset 0x1efc contains 27 entries:
Tag Type Name/Value
0x00000001 (NEEDED) Shared library: [libwine.so.1]
0x00000001 (NEEDED) Shared library: [libpthread.so.0]
0x00000001 (NEEDED) Shared library: [libc.so.6]
0x0000001d (RUNPATH) Library runpath: [$ORIGIN/../lib32]
0x0000000c (INIT) 0x7c000854
0x0000000d (FINI) 0x7c000e54
[more addresses without file names]
However, I cannot list the shared object dependencies using ldd:
$ ldd /usr/bin/wine
/usr/bin/ldd: line 117: /usr/bin/wine: No such file or directory
strace shows:
execve("/usr/bin/wine", ["wine"], 0x7fff20dc8730 /* 66 vars */) = -1 ENOENT (No such file or directory)
fstat(2, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 4), ...}) = 0
write(2, "strace: exec: No such file or di"..., 40strace: exec: No such file or directory
) = 40
getpid() = 23783
exit_group(1) = ?
+++ exited with 1 +++
Edited to add suggestion by @jww: The problem appears to happen before dynamically linked libraries are requested, because no ld debug messages are generated:
$ LD_DEBUG=all wine
bash: /usr/bin/wine: No such file or directory
Even when only asking for the possible values of LD_DEBUG, the error occurs instead:
$ LD_DEBUG=help wine
bash: /usr/bin/wine: No such file or directory
Edited to add suggestion of @Raman Sailopal: The problem seems to lie within the executable, as copying the contents of /usr/bin/wine to another already created file produces the same error
root:bin # cp cat testcmd
root:bin # testcmd --help
Usage: testcmd [OPTION]... [FILE]...
Concatenate FILE(s) to standard output.
[rest of cat help page]
root:bin # dd if=wine of=testcmd
18+1 records in
18+1 records out
9712 bytes (9.7 kB, 9.5 KiB) copied, 0.000404061 s, 24.0 MB/s
root:bin # testcmd
bash: /usr/bin/testcmd: No such file or directory
What is the problem or what can I do to find out which file or directory is missing?
uname -a:
Linux laptop 4.11.3-1-ARCH #1 SMP PREEMPT Sun May 28 10:40:17 CEST 2017 x86_64 GNU/Linux
|
This:
$ file /usr/bin/wine
/usr/bin/wine: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.32,
BuildID[sha1]=eaf6de433d8196e746c95d352e0258fe2b65ae24, stripped
Combined with this:
$ ldd /usr/bin/wine
/usr/bin/ldd: line 117: /usr/bin/wine: No such file or directory
Strongly suggests that the system does not have the /lib/ld-linux.so.2 ELF interpreter. That is, this 64-bit system does not have any 32-bit compatibility libraries installed. Thus, @user1334609's answer is essentially correct.
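You can verify this situation by extracting the interpreter a binary requests and checking whether that file exists; a sketch shown against /bin/ls (readelf comes with binutils and may not be installed everywhere, hence the guard):

```shell
if command -v readelf >/dev/null; then
    # Pull the PT_INTERP path out of the program headers.
    interp=$(readelf -l /bin/ls | sed -n 's/.*interpreter: \(.*\)]/\1/p')
    echo "requested interpreter: $interp"
    if [ -e "$interp" ]; then
        echo "present"
    else
        echo "missing: install the matching loader/compat libraries"
    fi
fi
```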
| Linux executable fails with "File not found" even though the file is there and in PATH |
1,294,624,203,000 |
For example:
$ node
-bash: /usr/local/bin/node: No such file or directory
$ foo
-bash: foo: command not found
What's the difference? In both cases, node and foo are invalid commands, but it seems like Unix just can't find the node binary? When uninstalling a program, e.g. node, is there a way to clean this up so that I get
$ node
-bash: node: command not found
EDIT:
Results from type command:
$ type node
node is hashed (/usr/local/bin/node)
$ type foo
-bash: type: foo: not found
|
That's because bash remembered your command location, storing it in a hash table.
After you uninstalled node, the hash table wasn't cleared; bash still thinks node is at /usr/local/bin/node, skips the PATH lookup, and calls /usr/local/bin/node directly using execve(). Since node isn't there anymore, execve() returns the ENOENT error, meaning no such file or directory, and bash reports that error to you.
In bash, you can remove an entry from the hash table:
hash -d node
or remove the entire hash table (this works in all POSIX shells):
hash -r
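The whole episode can be reproduced with a scratch command (a sketch; the command name and directory are arbitrary, and the quoted error messages are the ones bash typically prints):

```shell
dir=$(mktemp -d)
printf '#!/bin/sh\necho hi\n' > "$dir/mycmd"
chmod +x "$dir/mycmd"
PATH="$dir:$PATH"
mycmd                 # found via PATH lookup and hashed; prints: hi
rm "$dir/mycmd"
mycmd || true         # typically: .../mycmd: No such file or directory
hash -r               # forget the stale entry
mycmd || true         # now: mycmd: command not found
```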
| Difference between "command not found" and "no such file or directory"? |
1,294,624,203,000 |
I'm writing a program that will test programs written by students. I'm afraid that I can't trust them and I need to make sure that it won't end up badly for the computer running it.
I was thinking about making some crash test user with limited access to system resources and run programs as that user, but from what I have found on the net so far, making a virtual system would be the safest option...
Can someone help me with choosing the right approach? Security is a big concern for me. On the other hand, I don't want a solution that is overkill and waste much time trying to learn something I don't really need.
|
A virtual machine can give you the highest security without a reboot, but the lowest performance.
Another option, for even higher security than a virtual machine: boot a "live" CD/DVD/pendrive without access to the hard drive (temporarily disable the HDD in BIOS; if you can't, at least do not mount the drive / unmount it, if mounted automatically - but this is much less secure)
A docker container is a somewhat less secure alternative to a full virtual machine. Probably the crucial difference (in terms of security) between the two is that systems running in docker actually use the kernel of your host system.
There are programs such as isolate that will create a special, secured environment - this is generally called a sandbox - those are typically chroot-based, with additional supervision - find one that fits you.
A simple chroot will be the least secure option (especially with regard to executing programs), though maybe a little faster, but... You'll need to build/copy a whole separate root tree and use bind mounts for /dev etc. (see Note 1 below!). So in general, this approach cannot be recommended, especially if you can use a more secure, and often easier to set up, sandbox environment.
Note 0: To the aspect of a "special user", like the nobody account: This gives hardly any security, much less than even a simple chroot. A nobody user can still access files and programs that have read and execute permissions set for other. You can test it with su -s /bin/sh -c 'some command' nobody. And if you have any configuration/history/cache file accessible to anybody (by a mistake or minor security hole), a program running with nobody's permissions can access it, grep for confidential data (like "pass=" etc.) and in many ways send it over the net or whatever.
Note 1: As Gilles pointed out in a comment below, a simple chroot environment will give very little security against exploits aiming at privilege escalation. A sole chroot makes sense security-wise only if the environment is minimal, consisting of security-confirmed programs only (but there still remains the risk of exploiting potential kernel-level vulnerabilities), and all the untrusted programs running in the chroot run as a user who does not run any process outside the chroot. What chroot does protect against (with the restrictions mentioned here) is direct system penetration without privilege escalation. However, as Gilles noted in another comment, even that level of security might be circumvented, allowing a program to break out of the chroot.
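None of this replaces a proper sandbox, but as a first, cheap layer you can at least clamp resources before running a submission (a sketch; the limits and the stand-in student.sh are assumptions):

```shell
cd "$(mktemp -d)"
printf '#!/bin/sh\necho "student output"\n' > student.sh   # stand-in submission
chmod +x student.sh
(
    ulimit -t 5        # at most 5 s of CPU time
    ulimit -f 1024     # cap the size of files the program can create
    timeout 10 ./student.sh      # plus a wall-clock limit
)
```

The subshell keeps the ulimit settings from affecting the grading script itself.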
| Execution of possibly harmful program on Linux |
1,294,624,203,000 |
I have found the term "LSB executable" or "LSB shared object" in the output of the file command in Linux. For example:
$ file /bin/ls
/bin/ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=4637713da6cd9aa30d1528471c930f88a39045ff, stripped
What does "LSB" mean in this context?
|
“LSB” here stands for “least-significant byte” (first), as opposed to “MSB”, “most-significant byte”. It means that the binary is little-endian.
file determines this from the sixth byte of the ELF header.
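You can check that byte by hand: the sixth byte of the ELF header (EI_DATA, at offset 5) is 01 for LSB (little-endian) and 02 for MSB (big-endian):

```shell
# Dump the sixth byte of the ELF header of /bin/ls.
od -An -t x1 -j 5 -N 1 /bin/ls    # prints 01 on little-endian builds
```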
| What does "LSB" mean when referring to executable files in the output of /bin/file? |
1,294,624,203,000 |
How can I execute a command making it believe that is on a different date than system one?
For instance, if I have this script:
#!/usr/bin/env bash
date +"%B %d, %Y"
It prints the actual date: march 13, 2014
But I would like it to print a different date, on the future or past, without changing the system date.
If I wasn't clear enough, I want a command line tool like this Windows GUI tool.
I do not want to use a different script (it was just an example).
I do not want to set a cronjob.
I do not want to change my general system date.
Only change the date that apply to the command to run.
|
Using the libfaketime software could be a solution
sudo apt-get install faketime
faketime '2006-09-20' wine Example.exe
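Applied to the date format from the question, a sketch (guarded in case faketime is not installed; LC_ALL=C pins the month name to English):

```shell
if command -v faketime >/dev/null; then
    LC_ALL=C faketime '2015-01-01 12:00:00' date +"%B %d, %Y"
    # prints: January 01, 2015
else
    echo "faketime not installed"
fi
```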
| How to trick a particular command into thinking it is a different date? |
1,294,624,203,000 |
I installed Debian in VirtualBox (for various experiments which usually broke my system) and tried to launch the VirtualBox Guest Additions script. I logged in as root and tried to launch autorun.sh, but I got «Permission denied». ls -l shows that the script has execute permission.
Sorry that I can't copy the output: VirtualBox is practically useless without the additions, as neither a shared directory nor a shared clipboard works. But just so you can be sure, I copied the permissions by hand:
#ls -l ./autorun.sh
-r-xr-xr-x 1 root root 6966 Mar 26 13:56 ./autorun.sh
At first I thought that it might be that the script executes something that gave the error. I tried to replace the /bin/sh shebang with something like #!/pathtorealsh/sh -xv, but I got no output; it seems the script can't even be executed.
I don't even have an idea what could cause it.
|
Maybe your file system is mounted with the noexec option set, so you cannot run any executable files. From the mount documentation:
noexec
Do not allow direct execution of any binaries on the mounted
filesystem. (Until recently it was possible to run binaries anyway
using a command like /lib/ld*.so /mnt/binary. This trick fails since
Linux 2.4.25 / 2.6.0.)
Try:
mount | grep noexec
Then check if your file system is listed in output.
If yes, you can solve this problem by re-mounting the file system with the exec option:
mount -o remount,exec filesystem
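Instead of eyeballing the full mount list, findmnt (from util-linux, where available) can report the filesystem a specific path lives on together with its options; here $PWD stands in for the directory holding your script:

```shell
if command -v findmnt >/dev/null; then
    findmnt -T "$PWD" -o TARGET,OPTIONS   # look for noexec in OPTIONS
fi
```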
| Running sh script: «Permission denied» despite the executable bit and root rights |
1,294,624,203,000 |
I am writing a Linux shell script that will print status banners during its execution, but only if the proper tool, say figlet, is installed (that is: reachable via the system PATH).
Example:
#!/usr/bin/env bash
echo "foo"
figlet "Starting"
echo "moo"
figlet "Working"
echo "foo moo"
figlet "Finished"
I would like for my script to work without errors even when figlet is not installed.
What could be a practical method?
|
My interpretation would use a wrapper function named the same as the tool; in that function, execute the real tool if it exists:
figlet() {
if command -v figlet >/dev/null 2>&1
then
command figlet "$@"
else
:
fi
}
Then you can have figlet arg1 arg2... unchanged in your script.
@Olorin came up with a simpler method: define a wrapper function only if we need to (if the tool doesn't exist):
if ! command -v figlet > /dev/null; then figlet() { :; }; fi
If you'd like the arguments to figlet to be printed even if figlet isn't installed, adjust Olorin's suggestion as follows:
if ! command -v figlet > /dev/null; then figlet() { printf '%s\n' "$*"; }; fi
| Linux shell script: Run a program only if it exists, ignore it if it does not exist |
1,294,624,203,000 |
root user can write to a file even if its write permissions are not set.
root user can read a file even if its read permissions are not set.
root user can cd into a directory even if its execute permissions are not set.
root user cannot execute a file when its execute permissions are not set.
Why?
user$ echo '#!'$(which bash) > file
user$ chmod 000 file
user$ ls -l file
---------- 1 user user 12 Jul 17 11:11 file
user$ cat file # Normal user cannot read
cat: file: Permission denied
user$ su
root$ echo 'echo hello' >> file # root can write
root$ cat file # root can read
#!/bin/bash
echo hello
root$ ./file # root cannot execute
bash: ./file: Permission denied
|
In short, because the execute bit is considered special; if it's not set at all, then the file is considered to be not an executable and thus can't be executed.
However, if even ONE of the execute bits is set, root can and will execute it.
Observe:
caleburn: ~/ >cat hello.sh
#!/bin/sh
echo "Hello!"
caleburn: ~/ >chmod 000 hello.sh
caleburn: ~/ >./hello.sh
-bash: ./hello.sh: Permission denied
caleburn: ~/ >sudo ./hello.sh
sudo: ./hello.sh: command not found
caleburn: ~/ >chmod 100 hello.sh
caleburn: ~/ >./hello.sh
/bin/sh: ./hello.sh: Permission denied
caleburn: ~/ >sudo ./hello.sh
Hello!
| Why can't root execute when executable bits are not set? |
1,294,624,203,000 |
Supposing I am in the same folder as an executable file, I would need to type this to execute it:
./file
I would rather not have to type /, because / is difficult for me to type.
Is there an easier way to execute a file? Ideally just some simple syntax like:
.file
or something else but easier than having to insert the / character there.
Perhaps there is some way to put something in the /bin directory, or create an alias for the interpreter, so that I could use:
p file
|
It can be "risky" but you could just have . in your PATH.
As has been said in others, this can be dangerous so always ensure . is at the end of the PATH rather than the beginning.
| Execute as .test rather than ./test |
1,347,119,015,000 |
I've been using Linux systems on a daily basis for years, and I never had major problems updating a system while it was running, but I still wonder why this is possible.
Let me give an example.
Suppose a program "A" from a certain package is running on a system. This program, at a certain point, needs to open another file ("B") from the same package. After that, program "A" closes "B" because it doesn't need it anymore. Suppose now I update the package "A" and "B" belong to. "A" is not directly affected by this operation, at least for the moment, since it is running in RAM and the update just replaced "A" on the hard disk. Suppose "B" has been replaced on the filesystem, too. Now "A" needs to read "B" again for some reason. The question is: is it possible that "A" could find an incompatible version of "B" and crash or malfunction in some other way?
Why does nobody update their systems by rebooting with a live CD or some similar procedure?
|
Updating Userland is Rarely a Problem
You can often update packages on a live system because:
Shared libraries are stored in memory, not read from disk on each call, so the old versions will remain in use until the application is restarted.
Open files are actually read from file-descriptors, not the file names, so the file contents remain available to the running applications even when moved/renamed/deleted until the sectors are over-written or the file descriptors are closed.
Packages that require reloading or restarting are usually handled properly by the package manager if the package has been well-designed. For example, Debian will restart certain services whenever libc6 is upgraded.
Generally, unless you're updating your kernel and aren't using ksplice, then programs or services may need to be restarted to take advantage of an update. However, there's rarely a need to reboot a system to update anything in userland, although on desktops it's occasionally easier than restarting individual services.
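The open-file-descriptor point is easy to demonstrate with plain shell redirection. In this self-contained sketch (temporary files only, no package manager involved), file descriptor 3 plays the role of a running program that holds a file open while an "upgrade" replaces it on disk:

```shell
tmpd=$(mktemp -d)
echo "version 1" > "$tmpd/lib"

exec 3< "$tmpd/lib"              # a "running program" opens the old file
rm "$tmpd/lib"                   # the "upgrade" unlinks it from the filesystem
echo "version 2" > "$tmpd/lib"   # and installs a new file under the same name

old=$(cat <&3)                   # the old contents survive via the descriptor
new=$(cat "$tmpd/lib")           # a fresh open sees the new contents
exec 3<&-

echo "held-open descriptor saw: $old"
echo "fresh open saw: $new"
rm -rf "$tmpd"
```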
See Also
http://en.wikipedia.org/wiki/Ring_%28computer_security%29#Supervisor_mode
| Why updating a running Linux system is not problematic? |
1,347,119,015,000 |
I understand that Linux uses shebang line to determine what interpreter to use for scripting languages, but how does it work for binaries?
I mean I can run Linux binaries, and having installed both wine and mono, Windows native and .NET binaries. And for all of them it's just ./binary-name (if not in PATH) to run it.
How does Linux determine that a given binary must be run as a Linux native binary, as a Windows native binary (using wine facilities) or as a Windows .NET binary (using mono facilities)?
|
In a word: binfmt_misc. It's a Linux-specific, non-portable, facility.
There are a couple of formats that are recognized by the kernel with built-in logic. Namely, these are the ELF format (for normal binaries) and the shebang convention (for scripts). (thanks to zwol for the following part of the answer). In addition, Linux recognizes a couple of esoteric or obsolete or compatibility builtin formats. You probably won't encounter them. They are a.out, "em86", "flat", and "elf_fdpic".
Everything else must be registered through the binfmt_misc system. This system allows you to register with the kernel a simple pattern check based on a magic number, and the corresponding interpreter.
| How does Linux determine what facilities to use to run a (non-text) binary? |
1,347,119,015,000 |
I was reading up on chmod and its octal modes. I saw that 1 is execute only. What is a valid use case for an execute only permission? To execute a file, one typically would want read and execute permission.
$ echo 'echo foo' > say_foo
$ chmod 100 ./say_foo
$ ./say_foo
bash: ./say_foo: Permission denied
$ chmod 500 ./say_foo
$ ./say_foo
foo
|
Shell scripts require the read permission to be executed, but binary files do not:
$ cat hello.cpp
#include<iostream>
int main() {
std::cout << "Hello, world!" << std::endl;
return 0;
}
$ g++ -o hello hello.cpp
$ chmod 100 hello
$ ./hello
Hello, world!
$ file hello
hello: executable, regular file, no read permission
Displaying the contents of a file and executing them are two different things. With shell scripts, these things are related because they are "executed" by "reading" them into a new shell (or the current one), if you'll forgive the simplification. This is why you need to be able to read them. Binaries don't use that mechanism.
For directories, the execute permission is a little different; it means you can do things to files within that directory (e.g. read or execute them). So let's say you have a set of tools in /tools that you want people to be able to use, but only if they know about them. chmod 711 /tools. Then executable things in /tools can be run explicitly (e.g. /tools/mytool), but ls /tools/ will be denied. Similarly, documents could be stored in /private-docs which could be read if and only if the file names are known.
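The /tools scenario can be reproduced without root in a throwaway directory; chmod 111 stands in for 711 here because the demonstration runs as the directory's owner:

```shell
tmpd=$(mktemp -d)
mkdir "$tmpd/tools"
printf '#!/bin/sh\necho tool-ran\n' > "$tmpd/tools/mytool"
chmod 755 "$tmpd/tools/mytool"

chmod 111 "$tmpd/tools"          # search/execute only, no read bit
listing=$(ls "$tmpd/tools" 2>/dev/null || echo "listing denied")
ran=$("$tmpd/tools/mytool")      # a known full path still works

chmod 755 "$tmpd/tools"          # restore permissions so cleanup succeeds
rm -rf "$tmpd"
echo "$listing"
echo "$ran"
```

(If run as root, the listing succeeds anyway, since root bypasses read checks on directories.)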
| What is a valid use case for an "execute only" file permission? |
1,347,119,015,000 |
I have some doubts regarding *nix.
I don't know which type of executable file ls is: whether it is .sh or .ksh or some other kind of system executable, and if so, what is it?
When I tried to see what the source code of the ls command looks like, it showed something unreadable. What method does *nix use to create these kinds of unreadable files, and can I make my own files similar to them (like ls, unreadable)?
|
You can determine the nature of an executable in Unix using the file command and the type command.
type
You use type to determine an executable's location on disk like so:
$ type -a ls
ls is /usr/bin/ls
ls is /bin/ls
So I now know that ls is located in 2 locations on my system: /usr/bin/ls and /bin/ls. Looking at those executables I can see they're identical:
$ ls -l /usr/bin/ls /bin/ls
-rwxr-xr-x. 1 root root 120232 Jan 20 05:11 /bin/ls
-rwxr-xr-x. 1 root root 120232 Jan 20 05:11 /usr/bin/ls
NOTE: You can confirm they're identical beyond their sizes by using cmp or diff.
with diff
$ diff -s /usr/bin/ls /bin/ls
Files /usr/bin/ls and /bin/ls are identical
with cmp
$ cmp /usr/bin/ls /bin/ls
$
Using file
If I query them using the file command:
$ file /usr/bin/ls /bin/ls
/usr/bin/ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=0x303f40e1c9349c4ec83e1f99c511640d48e3670f, stripped
/bin/ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=0x303f40e1c9349c4ec83e1f99c511640d48e3670f, stripped
So these would be actual physical programs that have been compiled from C/C++. If they were shell scripts they'd typically present like this to file:
$ file somescript.bash
somescript.bash: POSIX shell script, ASCII text executable
What's ELF?
ELF is a file format, it is the output of a compiler such as gcc, which is used to compile C/C++ programs such as ls.
In computing, the Executable and Linkable Format (ELF, formerly called Extensible Linking Format) is a common standard file format for executables, object code, shared libraries, and core dumps.
It typically will have one of the following extensions in the filename: none, .o, .so, .elf, .prx, .puff, .bin
| How are system commands like ls created? |
1,347,119,015,000 |
Is there any way to set the +x bit on a script when creating it?
For example I run:
vim -some_option_to_make_file_executable script.sh
and after saving I can run the file without any additional steps.
ps. I can run chmod from vim or even from the console itself, but this is a little annoying, because vim then suggests reloading the file. It's also annoying to type the chmod command every time.
pps. It would be great to make it depend on the file extension (I don't need an executable .txt :-) )
|
I don't recall where I found this, but I use the following in my ~/.vimrc
" Set scripts to be executable from the shell
au BufWritePost * if getline(1) =~ "^#!" | if getline(1) =~ "/bin/" | silent !chmod +x <afile> | endif | endif
The command automatically sets the executable bit if the first line starts with "#!" and contains "/bin/".
| vim: create file with +x bit |
1,347,119,015,000 |
Whenever I run file on an ELF binary I get this output:
[jonescb@localhost ~]$ file a.out
a.out: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for
GNU/Linux 2.6.9, dynamically linked (uses shared libs), for GNU/Linux 2.6.9,
not stripped
I'm just wondering what changed in Linux 2.6.9 that this binary couldn't run on 2.6.8?
Wasn't ELF support added in Linux 2.0?
|
glibc has a configure option called --enable-kernel that lets you specify the minimum supported kernel version. When object files are linked with that glibc build, the linker adds a SHT_NOTE section to the resulting executable named .note.ABI-tag that includes that minimum kernel version. The exact format is defined in the LSB, and file knows to look for that section and how to interpret it.
The reason your particular glibc was built to require 2.6.9 depends on who built it. It's the same on my system (Gentoo); a comment in the glibc ebuild says that it specifies 2.6.9 because it's the minimum required for the NPTL, so that's likely a common choice. Another one that seems to come up is 2.4.1, because it was the minimum required for LinuxThreads, the package used before NPTL
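You can inspect the note yourself with readelf from binutils. This sketch assumes readelf is available and skips quietly if it is not, or if the inspected binary carries no ABI-tag note (as on musl-based systems):

```shell
bin=$(command -v sh)             # any glibc-linked ELF binary will do
if command -v readelf >/dev/null 2>&1; then
    # the .note.ABI-tag section records "OS: Linux, ABI: <min kernel>"
    note=$(readelf -n "$bin" 2>/dev/null | grep -A2 'ABI-tag' || true)
    : "${note:=no ABI-tag note found}"
else
    note="readelf not installed; skipping"
fi
echo "$note"
```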
| Why does the file command say that ELF binaries are for Linux 2.6.9? |
1,347,119,015,000 |
How can I set a file to be executable by other users but not readable/writable by them? The reason for this: I'm executing something with my username, but I don't want to give out the password. I tried:
chmod 777 testfile
chmod a=x
chmod ugo+x
I still get permission denied when executing as another user.
|
You need both read and execute permissions on a script to be able to execute it. If you can't read the contents of the script, you aren't able to execute it either.
tony@matrix:~$ ./hello.world
hello world
tony@matrix:~$ ls -l hello.world
-rwxr-xr-x 1 tony tony 17 Jul 13 22:22 hello.world
tony@matrix:~$ chmod 100 hello.world
tony@matrix:~$ ls -l hello.world
---x------ 1 tony tony 17 Jul 13 22:22 hello.world
tony@matrix:~$ ./hello.world
bash: ./hello.world: Permission denied
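The same effect can be reproduced in a throwaway directory; with mode 100 the kernel happily starts the interpreter, but the interpreter then cannot read the script:

```shell
tmpd=$(mktemp -d)
printf '#!/bin/sh\necho ok\n' > "$tmpd/s"

chmod 100 "$tmpd/s"                           # execute only
r1=$("$tmpd/s" 2>/dev/null || echo "denied")

chmod 500 "$tmpd/s"                           # read + execute
r2=$("$tmpd/s")

rm -rf "$tmpd"
echo "mode 100: $r1"
echo "mode 500: $r2"
```

(Run as root, the mode-100 case succeeds anyway, since root bypasses the read check.)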
| File permission execute only |
1,347,119,015,000 |
I'm confused about execute file permissions not behaving as I expect. Probably because my expectations are wrong. Anyway:
I have a script file, for simplicity is just called s, located in ~/bin. For the sake of this example, the file contains just the following lines:
#!/bin/zsh
echo "Test";
Very simple.
I navigate to the ~/bin directory, and chmod the file permissions of s to 400 - i.e., read-only for me only. No execute permission. So then I try executing the script by entering its path, giving this:
% ./s
zsh: permission denied: ./s
So far so good. The file can't be executed due to the wrong permissions. Bumping permissions up to 500 (execute permission granted) works fine too - with these permissions, the file executes fine:
% ./s
Test
This is all as expected. But then I chmod permissions back down to 400 (execute permission off again), try sourceing the file, and this happens:
% source s
Test
Although permissions are 400, the script executes.
So here's my question: why does ./s fail (like it should) but source s executes normally? Doesn't this defeat the whole purpose of the execute permission?
At 400 permissions, sh s and zsh s also work.
I'm sure I'm either doing or understanding something horribly wrong somewhere. Can someone point out where to me, and explain the difference between ./s, source s, sh s and zsh s?
|
When you run ./s, you tell the kernel to execute the program s. If you have execution permission, then the kernel reads the first few bytes of the file, sees the #! line so it knows that this is a script, and runs the interpreter, passing it the script name as its first argument. If you don't have execute permission, the kernel aborts the execution at the first step.
When you run zsh s, you execute zsh, and tell it to read the file called s and interpret it as commands. You aren't executing s, you're executing zsh. Same thing with sh s or cat s.
When you run source s, again, you tell zsh to read a file, so what matters is that you have read permission on it.
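All three cases fit in one self-contained sketch; the script has only read permission (mode 400), so direct execution fails while sourcing it and passing it to an interpreter both work:

```shell
tmpd=$(mktemp -d)
printf 'echo hello-from-s\n' > "$tmpd/s"
chmod 400 "$tmpd/s"                           # read only, no execute bit

direct=$("$tmpd/s" 2>/dev/null || echo "exec denied")
sourced=$(. "$tmpd/s")                        # sourcing needs only read
viash=$(sh "$tmpd/s")                         # as does naming it to sh

rm -rf "$tmpd"
echo "$direct"
echo "$sourced"
echo "$viash"
```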
| Executing a script in zsh - file permissions |
1,347,119,015,000 |
I sometimes run into software that is not offered in .deb or .rpm but only as an executable.
For example Visual Studio Code, WebStorm or Kerbal Space Programm.
For this question, I will take Visual Studio Code as the point of reference.
The software is offered as a zipped package.
When unzipping, I'm left with a folder called VSCode-linux-x64 that contains an executable named Code.
I can double click Code or point to it with my terminal like /home/user/Downloads/VSCode-linux-x64/Code to execute it.
However, I would like to know if there is a proper way to install this applications.
What I want to achieve is:
one place where I can put all the applications/software that are
offered in this manner (executables)
terminal support (that means, for
example: I can write vscode from any folder in my terminal and it
will automatically execute Visual Studio Code)
Additional info:
Desktop Environment: Gnome3
OS: Debian
EDIT:
I decided to give @kba the answer because his approach works better with my backup solution and, besides that, having a script execute the binaries gives you the possibility to add arguments.
But to be fair, @John WH Smith's approach is just as good as @kba's.
|
To call a program by its name, shells search the directories in the $PATH environment variable. In Debian, the default $PATH for your user should include /home/YOUR-USER-NAME/bin (i.e. ~/bin).
First make sure the directory ~/bin exists or create it if it does not:
mkdir -p ~/bin
You can symlink binaries to that directory to make it available to the shell:
mkdir -p ~/bin
ln -s /home/user/Downloads/VSCode-linux-x64/Code ~/bin/vscode
That will allow you to run vscode on the command line or from a command launcher.
Note: You can also copy binaries to the $PATH directories but that can cause problems if they depend on relative paths.
In general, though, it's always preferable to properly install software using the means provided by the OS (apt-get, deb packages) or the build tools of a software project. This will ensure that dependent paths (like start scripts, man pages, configurations etc.) are set up correctly.
Update: Also reflecting Thomas Dickey's comments and Faheem Mitha's answer, what I usually do for software that comes as a tarball with a top-level binary and expects to be run from there:
Put it in a sane location (in order of standards-compliance /opt, /usr/local or a folder in your home directory, e.g. ~/build) and create an executable script wrapper in a $PATH location (e.g. /usr/local/bin or ~/bin) that changes to that location and executes the binary:
#!/bin/sh
cd "$HOME/build/directory"
exec ./top-level-binary "$@"
Since this emulates changing to that directory and executing the binary manually, it makes it easier to debug problems like non-existing relative paths.
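Here is the whole layout as a runnable sketch in a temporary directory; $tmpd/app and $tmpd/bin are stand-ins for something like ~/build/VSCode-linux-x64 and ~/bin, and Code is a fake binary that just reports its working directory:

```shell
tmpd=$(mktemp -d)
mkdir -p "$tmpd/app" "$tmpd/bin"

# fake top-level binary that cares about its working directory
printf '#!/bin/sh\necho "running from $PWD"\n' > "$tmpd/app/Code"
chmod +x "$tmpd/app/Code"

# wrapper script placed in a PATH directory
cat > "$tmpd/bin/vscode" <<EOF
#!/bin/sh
cd "$tmpd/app"
exec ./Code "\$@"
EOF
chmod +x "$tmpd/bin/vscode"

result=$(PATH="$tmpd/bin:$PATH" vscode)
echo "$result"
rm -rf "$tmpd"
```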
| How to install executables |
1,347,119,015,000 |
echo 'main(){}' | gcc -xc - -o /dev/stdout | ???
Is there a way to run the output binary on a unix-like system?
EDIT: I needed it to run the output of g++ in a sandboxed environment where I can't write any file (nothing malicious, I promise).
|
I don't believe this is possible. The exec(2) system call always requires a filename or absolute path (the filename is always a char*). posix_spawn also has similar requirements for a filename.
The closest you could do is pipe the output into a named pipe and try executing from the pipe. That may work, although the shell may refuse to execute any file that does not have the --x--x--x bits set. Create the pipe with mkfifo(1) and see if you can get it to work.
Another approach would be to write something that reads standard input, writes a file out to a temporary area, sets the --x bits on it, forks and execs, then deletes the file. The inode and contents will remain until the program finishes executing, but the file won't be accessible through the file system. When the process terminates the inode will be released and the storage will be returned to the free list.
EDIT: As Mat points out, the first approach won't work as the loader will attempt to demand-page in the executable, which will generate random seek traffic on the file, and this isn't possible on a pipe. This leaves some sort of approach like the second.
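A shell sketch of that second approach; the "compiler" here is just printf feeding a script through the pipe, where a real use would pipe gcc output instead:

```shell
# read stdin into a temp file, make it executable, run it, delete it
runpipe() {
    t=$(mktemp) || return 1
    cat > "$t" && chmod u+x "$t"
    "$t"
    status=$?
    rm -f "$t"
    return $status
}

out=$(printf '#!/bin/sh\necho ran-from-pipe\n' | runpipe)
echo "$out"
```

On modern Linux a file-less variant is possible with memfd_create(2) plus /proc/self/fd, but that needs a small compiled helper rather than plain shell.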
| Is there a way to execute a native binary from a pipe? |
1,347,119,015,000 |
I installed an application [e.g. fdisk], but it requires libraries at run time. I am looking for a utility/tool which will help me create a static binary from already-installed binaries, so that I can use it anywhere.
The only reliable tool that I found is ErmineLight from
here, but this one is shareware.
Is there any open-source software available for the same?
EDIT: fdisk is just an example. I work on LFS most of the time, so if I have to use any utility, I need to follow these steps:
Download the source
configure
make
make test
make install
So, just to save time, I am looking for a solution in which I make a static binary on Debian, Fedora, or another distribution, try it on LFS, and if it works as required, I then go with source-code compilation.
|
If fdisk is just an example and your goal is really to make static executables from dynamic executables, try Elf statifier. There's even a comparison with Ermine (by the Ermine vendor, so caveat (non-)emptor). Note that
If you have many executables, their combined size is likely to be more than the combined size of the dynamically-linked executables plus the necessary dynamic libraries.
There are features of GNU libc that may not work in a statically-linked executables, such as NSS (databases of user names, host names, etc.) and locale-related features.
If your goal is to have a small, portable suite of system tools, you're looking for BusyBox, a suite of core tools intended for embedded systems (including fdisk). You may also be interested in a smaller standard library than Glibc, for example dietlibc or µClibc.
| Creating Static Binary |
1,347,119,015,000 |
I created a Bash script which echoes "Hello World". I also created a test user, bob, using adduser.
Nobody has permission to execute that file as denoted by ls:
$ ls -l hello.sh
-rw-r--r-- 1 george george 19 Mai 29 13:06 hello.sh
As we can see from the above the file's owner is george where he has only read and write access but no execute access. But logged in as george I am able to execute the script directly:
$ . hello.sh
Hello World
To make matters worse, I log in as bob, where I have only read permission, but I am still able to execute the file:
$ su bob
Password:
$ . /home/george/testdir/hello.sh
Hello World
What's going on?
|
In your examples, you are not executing the files, but sourcing them.
Executing would be via
$ ./hello.sh
and for that, execution permission is necessary. In this case a sub-shell is opened in which the commands of the script file are executed.
Sourcing, i.e.
$ . hello.sh
(with a space in between) only reads the file, and the shell from which you have called the . hello.sh command then executes the commands directly as read, i.e. without opening a sub-shell. As the file is only read, the read permission is sufficient for the operation. (Also note that stating the script filename like that invokes a PATH search, so if there is another hello.sh in your PATH that will be sourced! Use explicit paths, as in . ./hello.sh to ensure you source "the right one".)
If you want to prevent that from happening, you have to remove the read permission, too, for any user who is not supposed to be using the script. This is reasonable anyway if you are really concerned about unauthorized use of the script, since
non-authorized users could easily bypass the missing execution permission by simply copy-and-pasting the script content into a new file to which they could give themselves execute permissions, and
as noted by Kusalananda, otherwise an unauthorized user could still comfortably use the script by calling it via
sh ./hello.sh
instead of
./hello.sh
because this also only requires read permissions on the script file (see this answer e.g.).
As a general note, keep in mind that there are subtle differences between sourcing and executing a script (see this question e.g.).
| How are users able to execute a file without permission? |
1,347,119,015,000 |
Looks like I cannot run any normal linux binaries if their name ends with .exe, any idea why?
$ cp /bin/pwd pwd
$ ./pwd
/home/premek
This is ok. But...
$ cp /bin/pwd pwd.exe
$ ./pwd.exe
bash: ./pwd.exe: No such file or directory
$ ls -la pwd.exe
-rwxr-xr-x 1 premek premek 39616 May 3 20:27 pwd.exe
$ file pwd.exe
pwd.exe: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=2447335f77d6d8c4245636475439df52a09d8f05, stripped
$ ls -la /lib64/ld-linux-x86-64.so.2
lrwxrwxrwx 1 root root 32 May 1 2019 /lib64/ld-linux-x86-64.so.2 -> /lib/x86_64-linux-gnu/ld-2.28.so
$ ls -la /lib/x86_64-linux-gnu/ld-2.28.so
-rwxr-xr-x 1 root root 165632 May 1 2019 /lib/x86_64-linux-gnu/ld-2.28.so
$ file /lib/x86_64-linux-gnu/ld-2.28.so
/lib/x86_64-linux-gnu/ld-2.28.so: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=f25dfd7b95be4ba386fd71080accae8c0732b711, stripped
|
I spent one day on this and of course 1 second after posting this question I remembered something like this existed to register .exe files for wine:
$ sudo cat /proc/sys/fs/binfmt_misc/wine
enabled
interpreter /usr/bin/wine
flags:
extension .exe
and /usr/bin/wine did not exist.
I got rid of it using:
$ sudo update-binfmts --remove wine /usr/bin/wine
update-binfmts: warning: no executable /usr/bin/wine found, but continuing anyway as you request
and it works now
| bash: ./*.exe: No such file or directory with executables named *.exe |
1,347,119,015,000 |
I had a question on a job interview:
How can you execute (run) the program with the user user1 without sudo privileges and without access to the root account:
$ whoami
user1
$ ls -l ~/binary_program
-rw-r--r-- 1 root root 126160 Jan 17 18:57 /home/user1/binary_program
|
Since you have read permission:
$ cp ~/binary_program my_binary
$ chmod +x my_binary
$ ./my_binary
Of course this will not auto-magically grant you escalated privileges. You would still be executing that binary as a regular user.
| Run a binary owned by root without sudo |
1,347,119,015,000 |
I want to call a python script script.py from the terminal by simply typing script. Is this possible? If so, how?
I know I can avoid typing python script.py by adding #!/usr/bin/env python to the top of the script, but I still have to add the suffix .py in order to run the script.
|
Unix/Linux file systems do not rely on extensions the way Windows does. You should not need the .py at the end of a file to run it.
You can run the file by either calling it with the interpreter:
python ScriptFile
Or by marking it executable and defining the interpreter on the first line (e.g. #!/usr/bin/python).
If you are unable to execute the file with:
/Path/to/ScriptFile
check the permissions with
ls -l ScriptFile
You may need to add the executable flag and chmod it so it will execute for you.
If you are using custom scripts regularly you may want to make sure the directory you store them is added to the PATH environment variable.
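A minimal end-to-end sketch in a temporary directory (python3 is assumed to be installed; the sketch notes it and skips otherwise):

```shell
tmpd=$(mktemp -d)
# extensionless script; the shebang line selects the interpreter
printf '#!/usr/bin/env python3\nprint("hello from script")\n' > "$tmpd/script"
chmod +x "$tmpd/script"

if command -v python3 >/dev/null 2>&1; then
    out=$("$tmpd/script")
else
    out="python3 not installed; skipping"
fi
echo "$out"
rm -rf "$tmpd"
```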
| Running python script from terminal without .py extension |
1,347,119,015,000 |
I have an sh file that I would like to be able to open from the terminal at any time. I would like to type "studio" into the terminal, and have android studio open
I recall using ln -s to do this, but I have forgotten and have already wasted much time searching the web.
Also, in which directory is the created symbolic link kept in?
Here is the syntax from my attempt, which results in "command not found":
ricardo@debian:~$ ln -s /opt/android-studio/bin/studio.sh studio
ricardo@debian:~$ studio
bash: studio: command not found
|
The command you ran created a symbolic link in the current directory. Judging by the prompt, the current directory is your home directory. Creating symbolic links to executable programs in your home directory is not particularly useful.
When you type the name of a program, the shell looks for it in the directories listed in the PATH environment variable. To see the value of this variable, run echo $PATH. The directories are separated by a colon (:). A typical path is /home/ricardo/bin:/usr/local/bin:/usr/bin:/bin but there's a lot of variation out there.
You need to create this symbolic link in one of the directories listed in $PATH. If you want to make the command available to all users, create the link in /usr/local/bin:
sudo ln -s /opt/android-studio/bin/studio.sh /usr/local/bin/studio
If you want to make the command available only to you (which is the only possibility if you don't have administrator privileges), create the link in ~/bin (the bin subdirectory of your home directory).
ln -s /opt/android-studio/bin/studio.sh ~/bin/studio
If your distribution doesn't put /home/ricardo/bin in your PATH (where /home/ricardo is your home directory), create it first with mkdir ~/bin, and add it to your PATH by adding the following line to ~/.profile (create the file if it doesn't exist):
PATH=~/bin:$PATH
The .profile file is read when you log in. You can read it in the current terminal by running . ~/.profile (this only applies to programs started from that terminal).
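The whole flow can be rehearsed with temporary stand-ins for /opt/android-studio and ~/bin before touching the real paths:

```shell
tmpd=$(mktemp -d)
mkdir "$tmpd/bin"                              # stands in for ~/bin
printf '#!/bin/sh\necho studio-started\n' > "$tmpd/studio.sh"
chmod +x "$tmpd/studio.sh"

ln -s "$tmpd/studio.sh" "$tmpd/bin/studio"     # link named after the command
out=$(PATH="$tmpd/bin:$PATH" studio)
echo "$out"
rm -rf "$tmpd"
```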
| How to use ln -s to create a command line shortcut? |
1,347,119,015,000 |
I have an executable binary; let's call it a.out. I can see the binary contains strings
$ strings a.out
...
/usr/share/foo
....
I need to change the string /usr/share/foo to /usr/share/bar. Can I just replace the string with sed?:
sed -i 's@/usr/share/foo@/usr/share/bar@' a.out
This looks like a safe thing to do. Will this also work when the strings are not the same length?
|
I don't know if your version of sed will be binary-clean or if it will choke on what it thinks are really long lines in its input, but barring those issues, editing the string in-place should work. To see whether it does, compare the old and new versions with cmp -l. It should tell you whether or not the only three differences between the two files are those 3 bytes.
Editing strings in a compiled executable will indeed work if the strings are of the same length, but it will almost always also work if you are shortening the string, due to the way that strings work in C. In C strings, everything after the NUL terminator does not count, so if you write a new NUL terminator before the position of the old one, you will effectively shorten the string.
In general, there is no way you can lengthen a string using this hack.
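Here is the NUL-padding trick on a stand-in "binary" (perl is used because it is binary-clean and assumed available; the sketch skips if it is not). The shortened path is padded with NUL bytes so the file size, and every offset after the string, stays unchanged:

```shell
tmpd=$(mktemp -d)
# stand-in binary: some bytes, a NUL-terminated path, more bytes
printf 'HEAD/usr/share/foo\0TAIL' > "$tmpd/blob"
before=$(wc -c < "$tmpd/blob")

if command -v perl >/dev/null 2>&1; then
    # 14-byte path replaced by a 12-byte path plus two NUL bytes
    perl -pi -e 's{/usr/share/foo}{"/usr/share/b" . "\0\0"}e' "$tmpd/blob"
    content=$(tr -d '\0' < "$tmpd/blob")
else
    content="perl not installed; skipping"
fi
after=$(wc -c < "$tmpd/blob")

echo "size before: $before, after: $after"
echo "$content"
rm -rf "$tmpd"
```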
| When can I edit strings in an executable binary? |
1,347,119,015,000 |
$ ls -l /usr/bin/sudo
-rwsr-xr-x 1 root root 136808 Jul 4 2017 /usr/bin/sudo
so sudo is runnable by any user, and any user who runs sudo will have root as the effective user ID of the process because the set-user-id bit of /usr/bin/sudo is set.
From https://unix.stackexchange.com/a/11287/674
the most visible difference between sudo and su is that sudo requires the user's password and su requires root's password.
Which user's password does sudo asks for? Is it the user represented by the real user ID of the process?
If yes, can't any user gain superuser privileges by running sudo and then providing their own password? Can Linux restrict that for some users?
Is it correct that sudo asks for the password after execve() starts to execute main() of /usr/bin/sudo?
Since the euid of the process has been changed to root (because the set-user-id bit of /usr/bin/sudo is set), what is the point of sudo asking for a password later?
Thanks.
I have read https://unix.stackexchange.com/a/80350/674, but it doesn't answer the questions above.
|
In its most common configuration, sudo asks for the password of the user running sudo (as you say, the user corresponding to the process’ real user id). The point of sudo is to grant extra privileges to specific users (as determined by the configuration in sudoers), without those users having to provide any other authentication than their own. However, sudo does check that the user running sudo really is who they claim to be, and it does that by asking for their password (or whatever authentication mechanism is set up for sudo, usually using PAM — so this could involve a fingerprint, or two-factor authentication etc.).
sudo doesn’t necessarily grant the right to become root, it can grant a variety of privileges. Any user allowed to become root by sudoers can do so using only their own authentication; but a user not allowed to, can’t (at least, not by using sudo). This isn’t enforced by Linux itself, but by sudo (and its authentication setup).
sudo does indeed ask for the password after it’s started running; it can’t do otherwise (i.e. it can’t do anything before it starts running). The point of sudo asking for the password, even though it’s root, is to verify the running user’s identity (in its typical configuration).
| Which user's password does `sudo` asks for? |
1,347,119,015,000 |
I want to find file types that are executable from the kernel's point of view. As far as I know all the executable files on Linux are ELF files. Thus I tried the following:
find * | file | grep ELF
However that doesn't work; does anybody have other ideas?
|
Later edit: only this one does what Jan needs (thank you, huygens):
find . -exec file {} \; | grep -i elf
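The reason the question's pipeline fails is that file reads names from its arguments, not from stdin. A self-contained variant of the working command (the copy of sh stands in for an ELF binary; file(1) is assumed installed, with a skip otherwise):

```shell
tmpd=$(mktemp -d)
cp "$(command -v sh)" "$tmpd/elfcopy"      # an ELF executable on most Linux systems
echo 'plain text' > "$tmpd/notes.txt"

if command -v file >/dev/null 2>&1; then
    found=$(find "$tmpd" -type f -exec file {} + | grep -i elf || echo "no ELF found")
else
    found="file(1) not installed; skipping"
fi
echo "$found"
rm -rf "$tmpd"
```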
| How to find executable filetypes? |
1,347,119,015,000 |
I'm trying to run a python script, on a headless Raspberry PI using winSCP and get the following error message:
Command '"./areadetect_movie_21.py"'
failed with return code 127 and error message
/usr/bin/env: python
: No such file or directory.
When I try and run from terminal, I get:
: No such file or directory.
I try a similar python script, in the same directory, with the same python shebang, the same permissions and using the same user pi, and it works.
I also do a ls and I can see the file, so I don't know why it will not run.
|
From AskUbuntu, answer by Gilles:
If you see the error “: No such file or directory” (with nothing before the colon), it means that your shebang line has a carriage return at the end, presumably because it was edited under Windows (which uses CR,LF as a line separator). The CR character causes the cursor to move back to the beginning of the line after the shell prints the beginning of the message and so you only get to see the part after CR which ends the interpreter string that's part of the error message.
Remove the CR: the shebang line needs to have a Unix line ending (linefeed only). Python itself allows CRLF line endings, so the CR characters on other lines don't hurt. Shell scripts on the other hand must be free of CR characters.
To remove the Windows line endings, you can use dos2unix:
sudo dos2unix /usr/local/bin/casperjs
or sed:
sudo sed -i -e 's/\r$//' /usr/local/bin/casperjs
If you must edit scripts under Windows, use an editor that copes with Unix line endings (i.e. something less brain-dead than Notepad) and make sure that it's configured to write Unix line endings (i.e. LF only) when editing a Unix file.
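The offending CR characters are easy to detect and strip programmatically as well; a Python sketch along the same lines as the sed command above:

```python
def fix_crlf(path):
    """Convert CRLF line endings to LF in place; return True if anything changed."""
    with open(path, "rb") as f:
        data = f.read()
    fixed = data.replace(b"\r\n", b"\n")
    if fixed != data:
        with open(path, "wb") as f:
            f.write(fixed)
        return True
    return False
```

Reading and writing in binary mode is deliberate: opening the file in text mode would let Python's own newline translation hide exactly the characters being removed.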
| No such file or directory but I can see it! |
1,347,119,015,000 |
I'm on a kali linux 64 bit.
I have created a python script which takes 2 arguments to start. I don't want to type out every time the exact same paths or search in the history of the commands I used in terminal. So I decided to create a simple script which calls the python script with its arguments.
#! /bin bash
python CreateDB.py ./WtfPath ./NoWtfPath/NewSystem/
It is the exact same command I would use in terminal. However, I get an error message when I try to execute the script file.
bash: ./wtf.sh: /bin: bad interpreter: Permission denied
wtf.sh has executable rights.
What is wrong?
|
You have a space instead of a forward slash here:
#! /bin bash
Should be:
#! /bin/bash
or simply
#!/bin/bash
(the first space is optional).
The shebang (#!) should be followed by the path to an executable, which may be followed by one argument, e.g.,
#!/usr/bin/env sh
In this case /usr/bin/env is the executable; see man env for details.
Just /bin refers to a directory.
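A shebang line like this can be sanity-checked before running the script; a Python sketch (the diagnostics strings are illustrative, not anything bash prints):

```python
import os

def check_shebang(path):
    """Return a diagnostic string for the interpreter named on the #! line."""
    with open(path, "rb") as f:
        first = f.readline().rstrip(b"\r\n")
    if not first.startswith(b"#!"):
        return "no shebang line"
    fields = first[2:].split()
    if not fields:
        return "empty shebang line"
    interp = fields[0].decode()
    if os.path.isdir(interp):
        return f"interpreter is a directory: {interp}"
    if not os.path.isfile(interp):
        return f"interpreter not found: {interp}"
    if not os.access(interp, os.X_OK):
        return f"interpreter not executable: {interp}"
    return f"ok: {interp}"
```

For the script in the question, the first field after `#!` is `/bin`, a directory, which is why the kernel rejects it as a "bad interpreter".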
| Bash Script Permission denied & Bad Interpreter |
1,347,119,015,000 |
Is it possible to check if given program was compiled with GNU gprof instrumentation, i.e. with '-pg' flag passed to both compiler and linker, without running it to check if it would generate a gmon.out file?
|
You could check for references to function mcount (or possibly _mcount or __mcount according to Implementation of Profiling). This function is necessary for profiling to work, and should be absent for non-profiled binaries.
Something like:
$ readelf -s someprog | egrep "\s(_+)?mcount\b" && echo "Profiling is on for someprog"
The above works on a quick test here.
| Detect if an ELF binary was built with gprof instrumentation? |
1,347,119,015,000 |
I have a script that works well when I ssh to the server to execute it myself, but has problems when Hudson, a continuous integration server, runs it.
I am automating tests on an embedded linux system (the target). The target is connected to Server A (RHEL 5) via serial and operated over minicom. Server B (FC 12) builds the tests that actually run on the target, and can ssh to Server A. Server C (RH) hosts Hudson, with Server B as a slave.
I've written a runscript (http://linux.die.net/man/1/runscript) script to do everything needed on the actual target; it boots the image, mounts a directory from Server B and executes the tests. A bash script on Server B invokes minicom with the runscript script along with some companion actions. I have a bash script on Server B which uses
ssh -t -t ServerA bashScript.sh
to get those tests run on the target. I am on Server C, I can get those tests run by ssh'ing to Server B and executing the script that ssh's to Server A which executes minicom with runscript. Whew. To review:
Server C: Hudson uses its slave mechanism to ssh to Server B.
Server B: kickOffTests.sh has the line ssh -t -t ServerA runTests.sh
Server A: runTests.sh calls a perl script which invokes minicom -S my.script ttyE1
Target, after booting: Mounts a directory from Server B, where the tests are, and enters that directory. It invokes yet another bash script, which runs the tests, which are compiled C executables.
Now, when I execute any of these scripts myself, they do what they should. However, when Hudson tries to do the same thing, over in the minicom session it complains about a line in the "yet another bash script" that invokes the C executable, ./executable, with ./executable: cannot execute binary file
I still have a lot to learn about linux, but I surmise this problem is a result of Hudson not connecting with a console. I don't know exactly what Hudson does to control its slave. I tried using the line export TERM=console in the configuration just before running kickOffTests.sh, but the problem remains.
Can anyone explain to me what is happening and how I can fix it? I cannot remove any of the servers from this equation. It may be possible to take minicom out of the equation but that would add an unknown amount of time to this project, so I'd much prefer a solution that uses what I already have.
|
The message cannot execute binary file has nothing to do with terminals (I wonder what led you to think that — and I recommend avoiding making such assumptions in a question, as they tend to drown your actual problem in a mess of red herrings). In fact, it's bash's way of expressing ENOEXEC (more commonly expressed as exec format error).
First, make sure you didn't accidentally try to run this executable as a script. If you wrote . ./executable, this tells bash to execute ./executable in the same environment as the calling script (as opposed to a separate process). That can't be done if the file is not a script.
Otherwise, this message means that ./executable is not in a format that the kernel recognizes. I don't have any definite guess as to what is happening though. If you can run the script on that same machine by invoking it in a different way, it can't just be a corrupt file or a file for the wrong architecture (it might be that, but there's more to it). I wonder if there could be a difference in the way the target boots (perhaps a race condition).
Here's a list of additional data that may help:
Output of file …/executable on server B.
Some information about the target, such as the output of uname -a if it's unix-like.
Check that the target sees the same file contents each time: run cksum ./executable or md5sum ./executable or whatever method you have on the target just before yet-another-bash-script invokes ./executable. Check that the results are the same in the Hudson invocation, in your successful manual invocation and on server B.
Add set -x at the top of yet-another-bash-script (just below the #!/bin/bash line). This will produce a trace of everything the script does. Compare the traces and report any difference or oddity.
Describe how the target is booting when you run the scripts manually and when Hudson is involved. It could be that the target is booted differently and some loadable module that provides the support for the format of ./executable doesn't get loaded (or is not loaded yet) in the Hudson invocations. You might want to use set -x in other scripts to help you there, and inspect the boot logs from the target.
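The underlying error is easy to reproduce: give the executable bit to a file whose contents the kernel cannot recognize, and execve(2) fails with ENOEXEC. A Python sketch (on Linux; a noexec-mounted temp directory would produce EACCES instead):

```python
import errno
import os
import subprocess
import tempfile

# A file with the executable bit set but no recognizable format
# (no ELF magic, no #! line) makes execve() fail with ENOEXEC,
# which bash reports as "cannot execute binary file".
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00\x01 definitely not a real executable \x00")
os.close(fd)
os.chmod(path, 0o755)

try:
    subprocess.run([path])
except OSError as exc:
    exec_errno = exc.errno   # typically ENOEXEC ("Exec format error")
finally:
    os.unlink(path)
```

Unlike a shell, Python does not fall back to interpreting the file as a script, so the raw kernel error is visible.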
| ./executable: cannot execute binary file |
1,347,119,015,000 |
I've installed the program Motion on one Linux machine (M1) and want the same program on another (M2).
There are various builds of this program, and I have forgotten which one I have used, so can I do a straight copy of the /usr/bin/motion file from M1 and place it in /usr/bin/motion on M2?
I know where the configuration file is, so I'll move that across, but I'm not sure on what video drivers the working version of motion uses on M2; is there any way of finding out?
Is there a way that I can find out its dependencies?
|
To move a program to another computer, you have to move:
1) Executable file
A simple way to find a command's path is the type command.
For example: type cal
cal is /usr/bin/cal
2) Library dependencies
You can find library dependencies with the ldd command. But remember: if you compiled a program from source, the CPU architecture of both servers must be the same.
For example: ldd date
linux-vdso.so.1 => (0x00007fff83dff000)
librt.so.1 => /lib64/librt.so.1 (0x0000003784e00000)
libc.so.6 => /lib64/libc.so.6 (0x0000003783e00000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x0000003784200000)
/lib64/ld-linux-x86-64.so.2 (0x0000003783a00000)
3) Configuration files
On the new server you may need to tell the program to re-create its configuration files, because the existing configuration files belong to the previous server.
4) Checking hardware dependency
To check this, I think you have to look at the program's website for supported hardware, or test the program in the new environment.
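The ldd output from step 2 can also be collected programmatically, which is handy when scripting the copy of a binary plus its libraries. A Python sketch (assumes glibc's ldd output format):

```python
import subprocess

def shared_deps(binary):
    """Return resolved shared-library paths from `ldd binary` (glibc format)."""
    out = subprocess.run(["ldd", binary], capture_output=True,
                         text=True, check=False).stdout
    deps = []
    for line in out.splitlines():
        fields = line.split()
        # lines look like: "libc.so.6 => /lib64/libc.so.6 (0x...)"
        if "=>" in fields:
            target = fields[fields.index("=>") + 1]
            if target.startswith("/"):
                deps.append(target)
    return deps
```

Copying the binary, every path returned here, and the program's configuration covers points 1 through 3 of this answer.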
| Portability of an executable to another Linux machine |
1,347,119,015,000 |
I am trying to start a node.js application with on a low permissions user. All the files I know of are owned by the correct user and have permissions set reasonably well. I'm trying to use a script file to do this. I invoke the script with this command
sudo su - nodejs ./start-apps.sh
The shell script runs this command to start the app
cd "/home/nodejs/my-app"
npm start
npm start is documented here. It basically pulls the command to use out of the package.json file, which in our app looks like this:
// snip
"scripts": {
"start": "node-dev app"
},
And it spits out the error:
> [email protected] start /home/nodejs/my-app
> node-dev app
sh: 1: node-dev: Permission denied
npm ERR! [email protected] start: `node-dev app`
npm ERR! Exit status 126
That sh seems to be saying that it's reporting errors from the shell command. I don't think the problem is accessing the npm command itself, because if it were, the permission denied would be raised before any output from the npm command. But just to rule it out, here are the permissions for the npm command itself:
$ sudo find / ! \( -type d \) -name npm -exec ls -lah {} \;
-rwxr-xr-x 1 root root 274 Nov 12 20:22 /usr/local/src/node-v0.10.22/deps/npm/bin/npm
-rwxr-xr-x 1 root root 274 Nov 12 20:22 /usr/local/lib/node_modules/npm/bin/npm
lrwxrwxrwx 1 root root 38 Jan 14 07:49 /usr/local/bin/npm -> ../lib/node_modules/npm/bin/npm-cli.js
It looks like everyone should be able to execute it.
The permissions for node-dev look like this:
$ sudo find / ! \( -type d \) -name node-dev -exec ls -lah {} \;
-rwxr-xr-x 1 nodejs nodejs 193 Mar 3 2013 /home/nodejs/.npm/node-dev/2.1.4/package/bin/node-dev
-rw-r--r-- 1 nodejs nodejs 193 Mar 3 2013 /home/nodejs/spicoli-authorization/node_modules/node-dev/bin/node-dev
lrwxrwxrwx 1 root root 24 Jan 14 07:50 /home/nodejs/spicoli-authorization/node_modules/.bin/node-dev -> ../node-dev/bin/node-dev
I've already tried chowning the link to nodejs:nodejs, but the script experiences the same error.
Is there some file permissions problem I'm not seeing with the binary files? Or is this an npm/node-dev specific error?
|
The second node-dev is not executable, and the symlink points to that. Although the symlink is executable (symlinks are always 777), it is the mode of the file it points to that counts; note that calling chmod on the link actually changes the mode of the file it points to (symlink permissions never change).
So perhaps you need to add the executable bit for everyone:
chmod 755 /home/nodejs/spicoli-authorization/node_modules/.bin/node-dev
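The claim that chmod on a symlink changes the target rather than the link is easy to verify; a Python sketch with a throwaway file (the file names are illustrative):

```python
import os
import stat
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "node-dev")      # stands in for the real script
link = os.path.join(d, "node-dev-link")

with open(target, "w") as f:
    f.write("#!/bin/sh\n")
os.chmod(target, 0o644)                   # not executable, as in the question
os.symlink(target, link)

os.chmod(link, 0o755)                     # chmod follows the symlink...
mode = stat.S_IMODE(os.stat(target).st_mode)
print(oct(mode))                          # ...so the *target* is now 0o755
```

The link itself still reports lrwxrwxrwx; only the file it points to changed mode.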
| Why is permission denied for npm start using node-dev? |
1,347,119,015,000 |
I recently bought a Raspberry Pi. I already have configured it, and I install a cross compiler for arm on my desktop (amd64). I compiled a simple "hello world" program and then I copy it from my desktop to my Pi with scp ./hello [email protected]:~/hello.
After logging in to my Pi, I run ls -l hello and get a normal response:
-rwxr-xr-x 1 david david 6774 Nov 16 18:08 hello
But when I try to execute it, I get the following:
david@raspberry-pi:~$ ./hello
-bash: ./hello: No such file or directory
david@raspberry-pi:~$ file hello
hello: ELF 32-bit LSB executable, ARM, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.26, BuildID[sha1]=0x6a926b4968b3e1a2118eeb6e656db3d21c73cf10, not stripped
david@raspberry-pi:~$ ldd hello
not a dynamic executable
|
If ldd says it is not a dynamic executable, then it was compiled for the wrong target.
Obviously you did cross-compile it, as file says is a 32-bit ARM executable. However, there's more than one "ARM" architecture, so possibly your toolchain was configured incorrectly.
If you are using crosstool-NG, have a look at the .config for the value of CT_ARCH_ARCH. For the raspberry pi, it should be "armv6j"1 -- or at least, that's what's working for me. There are other specifics, but I think that should be enough. Unfortunately, if it's wrong, you now have to rebuild.
IMO getting a cross-compiler toolchain to work can be tedious and frustrating, but, presuming the host is not a significant factor (it shouldn't be), in this case it can be done. Crosstool-ng uses a TUI configurator, so if you end up having to try multiple builds, write down your choices each time so you know what worked.
1 I believe armv7 is a much more common arch (lots of phones and such), so if you are just using something you believe is a generic ARM cross-compiler, that's probably the issue. These numbers are confusing as, e.g., the pi's processor is an ARM11, but (as per that page), the ARM11 family of processors uses the ARMv6 architecture -- i.e. ARM11 is an implementation of ARMv6.
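When file isn't handy on the build host, the target architecture of a binary can be read straight from the ELF header's e_machine field; a Python sketch listing a few common values from the ELF specification:

```python
import struct

# A few e_machine values defined by the ELF specification.
MACHINES = {3: "x86", 40: "ARM", 62: "x86-64", 183: "AArch64"}

def elf_machine(path):
    """Return the target architecture of an ELF file, or None if not ELF."""
    with open(path, "rb") as f:
        header = f.read(20)
    if len(header) < 20 or header[:4] != b"\x7fELF":
        return None
    endian = "<" if header[5] == 1 else ">"      # EI_DATA: 1 = little-endian
    (machine,) = struct.unpack_from(endian + "H", header, 18)
    return MACHINES.get(machine, f"unknown e_machine {machine}")
```

This tells you the instruction-set family, but as the footnote explains, not the sub-architecture (ARMv6 vs ARMv7 both report "ARM").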
| “No such file or directory” when executing a cross-compiled program on a Raspberry Pi |
1,347,119,015,000 |
If I compile a program using gcc, and try to execute it from the bash shell, what is the exact sequence of steps followed by bash to execute it?
I know fork(), execve(), loader, dynamic linker (and other things) are involved, but can someone give an exact sequence of steps and some suitable reading reference ?
Edit:
From the answers, it seems the question could imply many possibilities. I want to narrow down to a simple case:
(test.c just prints hello world)
$ gcc test.c -o test
$ ./test
What will be the steps in the above case (./test), specifically relating bash starting program in some child process, doing loading, linking etc. ?
|
Well, the exact sequence may vary, as there might be a shell alias or function that first gets expanded/interpreted before the actual program gets executed, and then differences for a qualified filename (/usr/libexec/foo) versus something that will be looked for through all the directories of the PATH environment variable (just foo). Also, the details of the execution may complicate matters, as foo | bar | zot requires more work for the shell (some number of fork(2), dup(2), and, of course, pipe(2), among other system calls), while something like exec foo is much less work as the shell merely replaces itself with the new program (i.e., it doesn't fork). Also important are process groups (especially the foreground process group, all PIDs of which eat SIGINT when someone starts mashing on Ctrl+C), sessions, and whether the job is going to be run in the background, monitored (foo &) or background, ignored (foo & disown). I/O redirection details will also change things, e.g., if standard input is closed by the shell (foo <&-), or whether a file is opened as stdin (foo < blah).
strace or similar will be informative about the specific system calls made along this process, and there should be man pages for each of those calls. Suitable system level reading would be any number of chapters from Stevens's "Advanced Programming in the UNIX Environment" while a shell book (e.g., "From Bash to Z Shell") will cover the shell side of things in more detail.
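For the simple ./test case, the core of what the shell does can be sketched in a few lines: fork a child, exec the program in the child, wait for it in the parent. A Python sketch (path search, job control, and signal handling omitted):

```python
import os

pid = os.fork()
if pid == 0:
    # Child: replace this process image with the new program, as the shell
    # does after resolving the path. execv only returns on failure.
    try:
        os.execv("/bin/echo", ["echo", "hello world"])
    except OSError:
        os._exit(127)          # the shell convention for "command not found"
else:
    # Parent (the "shell"): wait for the child and collect its status.
    _, status = os.waitpid(pid, 0)
    exit_code = os.WEXITSTATUS(status)
    print("child exited with", exit_code)
```

Inside execve(), the kernel then maps the ELF binary, loads the dynamic linker named in its PT_INTERP header, and the dynamic linker resolves shared libraries before jumping to main().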
| How does a shell execute a program? |
1,347,119,015,000 |
Imagine I have a script foo. It should be run once when the user logs in and isn't needed after a successful run.
My question: Is it safe to remove the script file from within the script?
E.g.:
#!/bin/bash
# do something
...
# if successful
rm /path/to/foo
exit 0
|
It is safe to remove the script file while it is running, since open file handles are not affected by removing or renaming the corresponding file.
For more information, see here.
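This is straightforward to demonstrate: the shell keeps reading the script through its open file descriptor, so commands after the rm still run. A Python-driven sketch:

```python
import os
import subprocess
import tempfile

# A script that deletes itself halfway through, then keeps executing:
# the shell's open file descriptor keeps the content readable after unlink.
fd, path = tempfile.mkstemp(suffix=".sh")
os.write(fd, b'rm -- "$0"\necho "still running after rm"\n')
os.close(fd)

result = subprocess.run(["sh", path], capture_output=True, text=True)
print(result.stdout.strip())            # the echo after rm still ran
print("file still exists:", os.path.exists(path))
```

Note the caveat this implies: removing the file is safe, but overwriting it in place while it runs is not, since the shell may read the new bytes mid-script.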
| Is it safe to remove a script file from within that script? |
1,347,119,015,000 |
One thing that puzzles me about desktop Linux, at least, is that just about everything is in my PATH. By everything, I mean every desktop application, including things like gnome-character-map and glchess. These have no command line interfaces to speak of, so I can't think of a case where I would be regularly launching these from a terminal - and, within that unlikely case, I can't imagine being inconvenienced by needing to type their full paths. It just seems cluttery, but maybe there's a good reason.
So, why does this happen? Is there any noteworthy impact on performance or maintainability?
|
All the commands that a user might want to run are in the PATH. That's what it's for. This includes commands that you run directly, commands that other people run directly, and commands that you or other people run indirectly because they are invoked by other commands. This is not limited to commands run from a terminal: commands run from a GUI are also searched in the command search path (again, that's what it's for).
Needing to type the full path would be terrible: you'd need to find out what the full path is! You'd need to keep track of whether it's in /usr/bin (which contains most programs shipped with the operating system), or in /usr/local/bin (which contains programs installed manually by the administrator, as well as programs that aren't part of the core OS on some unix variants), or in some other system-specific directory, or somewhere in the user's home directory.
It's difficult to answer about the “impact on performance or maintainability” because you don't say what you're comparing it to. If you're comparing with having to type the full path everywhere, that's a nightmare for maintainability: if you ever relocate a program, or if you want to install a newer version than what came with the OS or was installed by a system administrator, you have to replace that full path everywhere. The performance impact of looking the name in a few directories is negligible.
If you're comparing with Windows, it's even worse: some programs add not only the executable, but also all kinds of crap to the PATH, and you end up with a mile-long PATH variable that still doesn't include all programs, because many programs don't add themselves to the system PATH when you install them.
| Why do so many programs live in PATH? |
1,347,119,015,000 |
I am trying to run an executable file called i686-elf-gcc in my Kali Linux that I downloaded from this repository. It's a cross-compiler. The problem is that even though the terminal and a script that I wrote can both see that the file exists, when its time to actually execute it I get
No such file or directory error. Here is an image that explains it:
I have also to say that I have granted the necessary permissions to the executable.
|
Typically, the "unable to execute... No such file or directory" means that either the executable binary itself or one of the libraries it needs does not exist. Libraries can also need other libraries themselves.
To see a list of libraries required by a specified executable or library, you can use the ldd command:
$ ldd /usr/local/bin/i686-elf-gcc
If the resulting listing includes lines like
<library name> => not found
then the problem can be fixed by making sure the mentioned libraries are installed and in the library search path.
In this case, the libraries might be at /usr/local/lib or /usr/local/lib64, but for some reason that directory is not included in the library search path.
If you want the extra libraries to be available for specific programs or sessions only, you could use the LD_LIBRARY_PATH environment variable to identify the extra path(s) that should be searched for missing libraries. This will minimize the chance of conflicts with the system default libraries.
But if you want to add a library directory to the system default library search path, you should add it to /etc/ld.so.conf file, or create a /etc/ld.so.conf.d/*.conf file of your choice and then run the ldconfig command as root to update the library search cache.
For example, if the missing libraries are found in /usr/local/lib64 and /etc/ld.so.conf.d directory exists, you might want to create crosscompiler.conf file like this:
# echo "/usr/local/lib64" > /etc/ld.so.conf.d/crosscompiler.conf
# ldconfig
| Running executable file: No such file or directory [closed] |
1,347,119,015,000 |
I have a question about overwriting a running executable, or overwriting a shared library (.so) file that's in use by one or more running programs.
Back in the day, for the obvious reasons, overwriting a running executable didn't work. There's even a specific errno value, ETXTBSY, that covers this case.
But for quite a while now, I've noticed that when I accidentally try to overwrite a running executable (for example, by firing off a build whose last step is cc -o exefile on an exefile that happens to be running), it works!
So my questions are, how does this work, is it documented anywhere, and is it safe to depend on it?
It looks like someone may have tweaked ld to unlink its output file and create a new one, just to eliminate errors in this case. I can't quite tell if it's doing this all the time, or only if it needs to (that is, perhaps after it tries to overwrite the existing file, and encounters ETXTBSY). And I don't see any mention of this on ld's man page. (And I wonder why people aren't complaining that ld may now be breaking their hard links, or changing file ownership, and like that.)
Addendum: The question wasn't specifically about cc/ld (although that does end up being a big part of the answer); the question was really just "How come I never see ETXTBSY any more? Is it still an error?" And the answer is, yes, it is still an error, just a rare one in practice. (See also the clarifying answer I just posted to my own question.)
|
It depends on the kernel, and on some kernels it might depend on the type of executable, but I think all modern systems return ETXTBSY (”text file busy“) if you try to open a running executable for writing or to execute a file that's open for writing. Documentation suggests that it's always been the case on BSD, but it wasn't the case on early Solaris (later versions did implement this protection), which matches my memory. It's been the case on Linux since forever, or at least 1.0.
What goes for executables may or may not go as well for dynamic libraries. Overwriting a dynamic library causes exactly the same problem that overwriting an executable does: instructions will suddenly be loaded from the same old address in the new file, which probably has something completely different. But this is in fact not the case everywhere. In particular, on Linux, programs call the open system call to open a dynamic library under the hood, with the same flags as any data file, and Linux happily allows you to rewrite the library file even though a running process might load code from it at any time.
Most kernels allow removing and renaming files while they're being executed, just like they allow removing and renaming files while they're open for reading or writing. Just like an open file, a file that's removed while it's being executed will not be actually removed from the storage medium as long as it is in use, i.e. until the last instance of the executable exits. Linux and *BSD allow it, but Solaris and HP-UX don't.
Removing a file and writing a new file by the same name is perfectly safe: the association between the code to load and the open (or being-executed) file that contains the code goes by the file descriptor, not the file name. It has the additional benefit that it can be done atomically, by writing to a temporary file then moving that file into place (the rename system call atomically replaces an existing destination file by the source file). It's much better than remove-then-open-write since it doesn't temporarily put an invalid, partially-written executable in place
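The write-to-temp-then-rename idiom described here is short enough to spell out; a Python sketch (function name is illustrative):

```python
import os
import tempfile

def atomic_replace(path, data):
    """Atomically replace `path` with `data` via a temp file and rename(2).

    A running process that already has the old file open (or mapped as an
    executable) keeps seeing the old inode; new opens see the new content.
    No reader ever observes a half-written file.
    """
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)   # same filesystem as target
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)                   # atomic rename over the target
    except BaseException:
        os.unlink(tmp)
        raise
```

Creating the temp file in the target's own directory matters: rename(2) is only atomic within a single filesystem.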
Whether cc and ld overwrite their output file, or remove it and create a new one, depends on the implementation. GCC (at least modern versions) and Clang take the second approach, in both cases by calling unlink on the target if it exists and then open to create a new file. (I wonder why they don't do write-to-temp-then-rename.)
I don't recommend depending on this behavior except as a safeguard since it doesn't work on every system (it may work on every modern systems for executables, but not for shared libraries), and common toolchains don't do things in the best way. In your build scripts, always generate files under a temporary file, then move them into place, unless you know the underlying tool does this.
| Overwriting a running executable or .so |
1,347,119,015,000 |
I am making quite some binaries, scripts etc that I want to install easily (using my own rpms). Since I want them accessible for everyone, my intuition would be to put them in /usr/bin;
no need to change PATH
however, my executables now disappear in a pool of all the others; how can I easily find all the executables I put there? I was thinking of:
a subdirectory in /usr/bin (I know I cannot do this; just to illustrate my thinking)
another directory (/opt/myself/bin) and linking each executable to /usr/bin (lots of work)
another directory (/opt/myself/bin) and linking the directory to /usr/bin (is this possible?)
what would be the "best, most linux-compliant way" to do this?
EDIT: we had a discussion on this in the company and came up with this sub-optimal option: put binaries in /usr/bin/company with a symbolic link from /usr/bin. I'm not thrilled with this solution (discussion ongoing)
|
If you bundle your binaries into your own RPMs then it's trivial to get a list of what they are and where they were installed.
Example
$ rpm -ql httpd | head -10
/etc/httpd
/etc/httpd/conf
/etc/httpd/conf.d
/etc/httpd/conf.d/README
/etc/httpd/conf.d/autoindex.conf
/etc/httpd/conf.d/userdir.conf
/etc/httpd/conf.d/welcome.conf
/etc/httpd/conf.modules.d
/etc/httpd/conf.modules.d/00-base.conf
I would suggest putting your executables in either /usr/bin or /usr/local/bin and rolling your own RPM. It's pretty trivial to do this and by managing your software deployment using an RPM you'll be able to label a bundle with a version number further easing the configuration management of your software as you deploy it.
Determining which RPMs are "mine"?
You can build your RPMs using some known information that could then be agreed upon prior to doing the building. I often build packages on systems that are owned by my domain so it's trivial to find RPMs by simply searching through all the RPMs that were built on host X.mydom.com.
Example
$ rpm -qi httpd
Name : httpd
Version : 2.4.7
Release : 1.fc19
Architecture: x86_64
Install Date: Mon 17 Feb 2014 01:53:15 AM EST
Group : System Environment/Daemons
Size : 3865725
License : ASL 2.0
Signature : RSA/SHA256, Mon 27 Jan 2014 11:00:08 AM EST, Key ID 07477e65fb4b18e6
Source RPM : httpd-2.4.7-1.fc19.src.rpm
Build Date : Mon 27 Jan 2014 08:39:13 AM EST
Build Host : buildvm-20.phx2.fedoraproject.org
Relocations : (not relocatable)
Packager : Fedora Project
Vendor : Fedora Project
URL : http://httpd.apache.org/
Summary : Apache HTTP Server
Description :
The Apache HTTP Server is a powerful, efficient, and extensible
web server.
This would be the Build Host line within the RPMs.
The use of /usr/bin/company?
I would probably discourage the use of a location such as this. Mainly because it requires all your systems to have their $PATH augmented to include it and is non-standard. Customizing things has always been a "rite of passage" for every wannabe Unix admin, but I always discourage it unless absolutely necessary.
The biggest issue with customizations like this is that they become a burden in both maintaining your environment and in bringing new people up to speed on how to use your environment.
Can I just get a list of files from RPM?
Yes you can achieve this but it will require 2 calls to RPM. The first will build a list of packages that were built on host X.mydom.com. After getting this list you'll need to re-call RPM querying for the files owned by each of these packages. You can achieve this using this one liner:
$ rpm -ql $(rpm -qa --queryformat "%-30{NAME}%{BUILDHOST}\n" | \
grep X.mydom.com | awk '{print $1}') | head -10
/etc/pam.d/run_init
/etc/sestatus.conf
/usr/bin/secon
/usr/bin/semodule_deps
/usr/bin/semodule_expand
/usr/bin/semodule_link
/usr/bin/semodule_package
/usr/bin/semodule_unpackage
/usr/sbin/fixfiles
/usr/sbin/genhomedircon
| where to put binaries so they are always in path and can be found easily |
1,347,119,015,000 |
I'm using a shared server.
On that server different versions of Java are installed:
Selection Path Priority Status
------------------------------------------------------------
0 /usr/lib/jvm/java-6-openjdk/jre/bin/java 1061 auto mode
* 1 /usr/lib/jvm/java-6-openjdk/jre/bin/java 1061 manual mode
2 /usr/lib/jvm/java-6-sun/jre/bin/java 63 manual mode
I would like to choose the second option, but when I try to do that it complains that I do not have the permissions (I'm not root).
Is there a way to do that in "user-space"?
Can the root user make this preference work only for me?
|
On Debian and derivatives, you should probably use update-java-alternatives. Anyway, all those tools are system related, not user related. If you want to use a different Java, simply put these lines in your ~/.profile:
JAVA_HOME=/usr/lib/jvm/java-6-sun
JRE_HOME=/usr/lib/jvm/java-6-sun/jre
PATH=$JAVA_HOME/bin:"$PATH"
export JAVA_HOME JRE_HOME
| update-alternatives just for one user |
1,347,119,015,000 |
I've been using Optware to install packages on my ARM-based NAS for a while - the usual stuff like Transmission, Samba and others. However, I'd been having problems with Transmission hanging not long after starting up. I looked around for a solution for a while and finally discovered that the Optware feed I was using wasn't the one that had been set up for my NAS box. I switched the feeds and reinstalled all the packages but now I'm getting the following error when I try and run anything that was reinstalled:
$ smbd
-bash: /opt/sbin/smbd: No such file or directory
$ transmission-daemon
-bash: /opt/bin/transmission-daemon: No such file or directory
$ unrar
-bash: /opt/bin/unrar: No such file or directory
I checked /opt/bin and /opt/sbin and the executables are definitely there - so what's the real problem?
$ ldd /opt/bin/transmission-daemon
/usr/bin/ldd: line 116: /opt/bin/transmission-daemon: No such file or directory
$ file /opt/bin/transmission-daemon
/opt/bin/transmission-daemon: ELF 32-bit LSB executable, ARM, version 1, dynamically linked (uses shared libs), stripped
$ readelf -l /opt/sbin/smbd
readelf: error while loading shared libraries: libc.so.0: cannot open shared object file: No such file or directory
$ cat /proc/$$/maps
…
40084000-4019e000 r-xp 00000000 09:01 112594 /lib/libc-2.7.so
…
I'm not sure what any of that means but it proves the file is there, right? Or is this something to do with the shared libs?
|
When you fail to execute a file that depends on a “loader”, the error you get may refer to the loader rather than the file you're executing.
The loader of a dynamically-linked native executable is the part of the system that's responsible for loading dynamic libraries. It's something like /lib/ld.so or /lib/ld-linux.so.2, and should be an executable file.
The loader of a script is the program mentioned on the shebang line, e.g. /bin/sh for a script that begins with #!/bin/sh.
The error message is rather misleading in not indicating that the loader is the problem. Unfortunately, fixing this would be hard because the kernel interface only has room for reporting a numeric error code, not for also indicating that the error in fact concerns a different file. Some shells do the work themselves for scripts (reading the #! line on the script and re-working out the error condition), but none that I've seen attempt to do the same for native binaries.
ldd isn't working on the binaries either because it works by setting some special environment variables and then running the program, letting the loader do the work. strace wouldn't provide any meaningful information either, since it wouldn't report more than what the kernel reports, and as we've seen the kernel can't report everything it knows.
Here your reinstalled executables (smbd, transmission-daemon, etc) are requesting a loader that isn't present on your system. So your new feed isn't right for your system either.
This situation often arises when you try to run a binary for the right system (or family of systems) and superarchitecture but the wrong subarchitecture. Here you have ELF binaries on a system that expects ELF binaries, so the kernel loads them just fine. They are ARM binaries running on an ARM processor, so the instructions make sense and get the program to the point where it can look for its loader. But it's the wrong loader.
Now I'm getting into conjecture, but I suspect your new feed is for the wrong ARM ABI. The ABI is the common language for making inter-procedure calls, and in particular for calling library functions. On some processor architectures, there are several possible ABI choices, and you need to pick one and use it consistently. There are two ARM ABIs with Linux distributions out there: the traditional arm-elf ABI, and the newer EABI (arm-eabi). You can't mix ABIs on the same system, so you need to find a source of packages for your ABI (or reinstall your system for a different ABI).
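One way to check this for yourself is to ask readelf which loader a binary requests and whether that loader actually exists; a diagnostic sketch, with /bin/ls standing in for the broken binary:

```shell
#!/bin/sh
# Print the interpreter (loader) an ELF binary asks for, then check
# whether it is actually present on this system.
bin=/bin/ls
loader=$(readelf -l "$bin" | sed -n 's/.*Requesting program interpreter: \(.*\)]/\1/p')
echo "requested loader: $loader"
if [ -e "$loader" ]; then
    echo "loader present"
else
    echo "loader MISSING - this produces the misleading ENOENT error"
fi
```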
| "No such file or directory" lies on Optware installed binaries |
1,347,119,015,000 |
Is it possible to get an executable to execute by default?
What I mean is this. I have an .sh file, which if I click on twice, it will show me this:
If I then click Execute, it does the right thing. Is it possible to get it to Execute without being shown the Execute File dialog box? So simply by double clicking the .sh file, it should do its thing without showing me the Execute File dialog box.
I am using Lubuntu/PCManFM if that info is needed.
|
You have to create a "Desktop Entry" like this (not tested):
#!/usr/bin/env xdg-open
[Desktop Entry]
Encoding=UTF-8
Type=Application
X-Created-By=name
Icon=icon
Exec="path_of_file" %u
Name=name of program
| Default to "Execute" when double-clicking on a shell script in PCManFM |
1,347,119,015,000 |
A minimal ELF executable only requires the ELF header and at least one program header in order to be functional. However, when I run strip on a short executable, it decides not to throw out the section header table or the section strings section, keeping them around although they have no purpose (as far as I know) for the program's execution.
Is there a reason why these aren't removed by strip? Is there another utility which removes everything which isn't required for the executable to run? I've tried manually editing the code-golfing executable I was making to remove the section headers, and it appears to work fine, and be much smaller.
|
The documentation for GNU binutils strip alludes to the reason, but is not explicit, mentioning in the description of --only-keep-debug that
Note - the section headers of the stripped sections are preserved, including their sizes, but the contents of the section are discarded. The section headers are preserved so that other tools can match up the debuginfo file with the real executable, even if that executable has been relocated to a different address space.
That is, unless told to explicitly via the -R option, strip will retain section headers to help other programs (including gdb) do their job.
The page Correct use of the strip command (part of Reverse Engineering using the Linux Operating System) notes
Running the strip command on an executable is the most common program protection method. In its default operation, the strip command removes the symbol table and any debugging information from an executable. This is how it is typically used. However, there is still useful information that is not removed.
and goes on to enumerate several useful things that might be left behind — for analysis of a "stripped" executable.
In Learning Linux Binary Analysis, this is reiterated, commenting that section headers are normally only missing when someone has deliberately removed them, and that without section headers, gdb and objdump are nearly useless.
| Why doesn't `strip` remove section headers from ELF executables? |
1,347,119,015,000 |
I have seen this question on this site and this prompted me to ask this question . I want to know in Unix speak what is the difference between an executable and a shell script ?
|
An executable refers to any file with the executable bit set that could be executed (even if there are errors in the actual running of the program).
A shell script is a specific type of executable that is intended to be interpreted by a shell using the #! directive to specify an interpreter.
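For instance, a minimal shell script, with its interpreter named on the #! line; once the executable bit is set, the kernel reads that line and runs /bin/sh with the file as its argument:

```shell
#!/bin/sh
# hello.sh - a minimal executable shell script
echo "Hello from a shell script"
```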
| In Unix speak what is the difference between a shell script and an executable? |
1,659,734,788,000 |
I'm using a program called node-webkit, but I can't start the program without specifying the full path to the executable file. Is there any way to associate a command (such as node-webkit) with an executable file on Linux, so that the full path to the file won't need to be specified?
|
A third option, perhaps least intrusive, is to add an alias in your .bashrc file. This file is a set of options for bash which it reads every time an instance of bash is started.
Open your .bashrc file with your file editor, e.g. gedit ~/.bashrc
Add the below line to the bottom of your .bashrc file
alias node-webkit=/path/to/node-webkit
Do source ~/.bashrc to be able to use the alias as if it were a command.
The way this works is like #define in C/C++, when you type node-webkit, it will be replaced with the right hand side of the alias definition, which here is the full path to the executable.
| Create a command for a Linux executable file |
1,659,734,788,000 |
When I'm on my Linux box I use bash as a shell. Now I wondered how bash handles the execution of an ELF file, that is, when I type ./program and program is an ELF file. I grepped bash-4.3.tar.gz; there does not seem to be any sort of magic-number parser to find out whether the file is an ELF, nor did I find an exec() syscall.
How does the process work? How does bash pass the execution of the ELF to the OS?
|
Bash knows nothing about ELF. It simply sees that you asked it to run an external program, so it passes the name you gave it as-is to execve(2). Knowledge of things like executable file formats, shebang lines, and execute permissions lives behind that syscall, in the kernel.
(It is the same for other shells, though they may choose to use another function in the exec(3) family instead.)
In Bash 4.3, this happens on line 5195 of execute_cmd.c in the shell_execve() function.
If you want to understand Linux at the source code level, I recommend downloading a copy of Research Unix V6 or V7, and going through that rather than all the complexity that is in the modern Linux systems. The Lions Book is a good guide to the code.
V7 is where the Bourne shell made its debut. Its entire C source code is just a bit over half the size of just that one C file in Bash. The Thompson shell in V6 is nearly half the size of the original Bourne shell. Yet, both of these simpler shells do the same sort of thing as Bash, and for the same reason. (It appears to be an execv(2) call from texec() in the Thompson shell and an execve() call from execs() in the Bourne shell's service.c module.)
| How does bash execute an ELF file? |
1,659,734,788,000 |
I noticed that when I install new application there are a few possible directories where the resulting binary file will be placed.
You can install with packaging manager, compile with make, easy_install for Python etc.
My $PATH looks as follows:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
Are there some conventions or rules that determine in which directory the resulting binary (or library) should be placed?
I noticed that when you compile the source code the result is often in /usr/local/bin/. Is this the rule?
Could you write an answer that explains the Unix philosophy (design and convention decisions) behind binaries, and why it is this way (why not use only one directory for all binaries)?
|
There is no hard and fast rule but each distribution has its own logic behind putting things where they do. Generally, /bin is used for system binaries, /usr/bin for default applications that comes with the distribution and /usr/local/bin for things that are installed outside of the normal distribution. You can add a X11 to any of those for X11 binaries – /usr/X11/bin and /usr/local/X11/bin are quite common. Some software will install in /opt as well.
This article has a more in depth explanation for things in /. And of course, wikipedia has a page.
| Directories with binary files in Linux |
1,659,734,788,000 |
If you go to the VirusTotal link, there is a tab called file info (I think; mine is Dutch). You'll see a header called
"Authenticode signature block and FileVersionInfo properties"
I want to extract the data under the header using Linux cli. Example:
Signature verification Signed file, verified signature
Signing date 7:43 AM 11/4/2014
Signers
[+] Microsoft Windows
[+] Microsoft Windows Production PCA 2011
[+] Microsoft Root Certificate Authority 2010
Counter signers
[+] Microsoft Time-Stamp Service
[+] Microsoft Time-Stamp PCA 2010
[+] Microsoft Root Certificate Authority 2010
I used the Camera.exe in Windows 10, to somehow extract the data.
I extracted the .exe file, and found a CERTIFICATE file in it, there is a lot of unreadable data, but also some text, I can read, that is - roughly - the same like the above output.
How can I extract signatures from a Windows .exe file under Linux using the cli?
|
On Linux there's a tool called osslsigncode which can process Windows Authenticode signatures. Verifying a binary's signature produces output similar to what you show in your example; on a vcredist_x86.exe I have to hand I get:
$ osslsigncode verify vcredist_x86.exe
Current PE checksum : 004136A1
Calculated PE checksum: 004136A1
Message digest algorithm : SHA1
Current message digest : 0A9F10FB285BA0064B5537023F8BC9E06E173801
Calculated message digest : 0A9F10FB285BA0064B5537023F8BC9E06E173801
Signature verification: ok
Number of signers: 1
Signer #0:
Subject: /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Corporation
Issuer : /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Code Signing PCA
Number of certificates: 7
Cert #0:
Subject: /OU=Copyright (c) 1997 Microsoft Corp./OU=Microsoft Corporation/CN=Microsoft Root Authority
Issuer : /OU=Copyright (c) 1997 Microsoft Corp./OU=Microsoft Corporation/CN=Microsoft Root Authority
Cert #1:
Subject: /OU=Copyright (c) 1997 Microsoft Corp./OU=Microsoft Corporation/CN=Microsoft Root Authority
Issuer : /OU=Copyright (c) 1997 Microsoft Corp./OU=Microsoft Corporation/CN=Microsoft Root Authority
Cert #2:
Subject: /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Code Signing PCA
Issuer : /OU=Copyright (c) 1997 Microsoft Corp./OU=Microsoft Corporation/CN=Microsoft Root Authority
Cert #3:
Subject: /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Corporation
Issuer : /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Code Signing PCA
Cert #4:
Subject: /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/OU=nCipher DSE ESN:D8A9-CFCC-579C/CN=Microsoft Timestamping Service
Issuer : /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Timestamping PCA
Cert #5:
Subject: /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/OU=nCipher DSE ESN:10D8-5847-CBF8/CN=Microsoft Timestamping Service
Issuer : /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Timestamping PCA
Cert #6:
Subject: /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Timestamping PCA
Issuer : /OU=Copyright (c) 1997 Microsoft Corp./OU=Microsoft Corporation/CN=Microsoft Root Authority
Succeeded
You can also extract the signature:
osslsigncode extract-signature vcredist_x86.exe vcredist_x86.sig
| How can I extract Signatures data from a Windows `exe` file under Linux using cli |
1,659,734,788,000 |
I'm running Arch linux on my laptop, which is kernel 3.12.9 right now. Something has changed about the way the kernel maps in a dynamically-linked executable and I can't figure it out. Here's the example:
% /usr/bin/cat /proc/self/maps
...
00400000-0040b000 r-xp 00000000 08:02 1186756 /usr/bin/cat
0060a000-0060b000 r--p 0000a000 08:02 1186756 /usr/bin/cat
0060b000-0060c000 rw-p 0000b000 08:02 1186756 /usr/bin/cat
00d6c000-00d8d000 rw-p 00000000 00:00 0 [heap]
7f29b3485000-7f29b3623000 r-xp 00000000 08:02 1182988 /usr/lib/libc-2.19.so
...
My question is: what is the third mapping from /usr/bin/cat?
Based on readelf -l /usr/bin/cat, there's a loadable segment of 0x1f8 bytes that should map at 0x400000. There's a loadable segment of 0xae10 bytes at 0x60ae10. Those two pieces of file correspond to the 00400000-0040b000 mapping, and the 0060a000-0060b000 mapping. But the third mapping, which claims to be at a file offset of 0xb000 bytes, doesn't seem to correspond to any Elf64_Phdr. In fact, the elf header only has 2 PT_LOAD segments.
I read through fs/binfmt_elf.c in the kernel 3.13.2 source code, and I don't see that the kernel maps in anything other than PT_LOAD segments. If I run strace -o trace.out /usr/bin/cat /proc/self/maps, I don't see any mmap() calls that would map in a piece of /usr/bin/cat, so that 3rd piece is mapped in by the kernel.
I ran the same command (cat /proc/self/maps) on a RHEL server that was running kernel 2.6.18 + RH patches. That only shows 2 pieces of /usr/bin/cat mapped into memory, so this might be new with kernel 3.x.
|
I finally figured this out. The kernel does map only 2 segments. The third piece is a portion of one of the two loaded by the kernel. The run-time linker, the program named in the INTERP program header, which is /usr/lib/ld-2.24.so for me right now, changes the permissions on the mappings using mprotect() so that there are read/write global variables, read-only global variables, and a read/execute text segment. You can see this happen using strace, but it's easy to miss, as it's only a single mprotect() call.
It wasn't a kernel change that caused this, it was a GNU lib C change.
| /proc/self/maps - 3rd mapped piece of file? |
1,659,734,788,000 |
Say I have a file hello:
#!/bin/sh
echo "Hello World!"
Provided the executable bit is set on that file, I can execute it by entering its path on the prompt:
$ ./hello
Hello World!
Is there a more explicit equivalent to the above? Something akin to:
$ execute hello
I know I can pass hello as an argument to /bin/sh, but I'm looking for a solution that automatically uses the interpreter specified in the shebang line.
My use case for this is to execute script files that do not have the executable flag set. These files are stored in a git repository, so I would like to avoid setting their executable flag or having to copy them to another location first.
|
You can use perl:
perl hello
From perl docs:
If the #! line does not contain the word "perl" nor the word "indir", the program named after the #! is executed instead of the Perl interpreter. This is slightly bizarre, but it helps people on machines that don't do #!, because they can tell a program that their SHELL is /usr/bin/perl, and Perl will then dispatch the program to the correct interpreter for them.
(via)
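A purely shell-based alternative is to read the #! line yourself; a sketch that handles only the simple "#!/path" form (the execute helper is a hypothetical name):

```shell
#!/bin/sh
# execute: run a script via the interpreter named on its #! line,
# without needing the executable bit set.
execute() {
    script=$1; shift
    line=$(head -n 1 "$script")
    case $line in
        '#!'*) interp=${line#??} ;;   # strip the leading "#!"
        *)     interp=/bin/sh ;;      # fall back if there is no shebang
    esac
    $interp "$script" "$@"
}

# demo with a throwaway, non-executable copy of the question's script
printf '#!/bin/sh\necho "Hello World!"\n' > hello
chmod 644 hello
execute ./hello     # prints: Hello World!
rm -f hello
```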
| Equivalent of executing a file (with shebang line) by entering its path? |
1,659,734,788,000 |
What type of parameter/flag can I use with the unix find command so that I search executables?
(if this question is better suited for another stackexchange forum, I welcome you telling me so)
p.s. If you know of one, I've been looking for a detailed (non-beginner) tutorial/screencast about grep and/or find.
|
Portably, the following command looks for regular files that are executable by their owner:
find . -perm -700 -type f
With GNU find ≥4.3, you can use -executable instead of -perm -700 to look for files that are executable by you.
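A self-contained demonstration of the portable variant, run on a throwaway directory (the file names are made up for the example):

```shell
#!/bin/sh
# Only the file with the owner-execute bit should be reported.
d=$(mktemp -d)
touch "$d/run.sh" "$d/data.txt"
chmod 700 "$d/run.sh"      # owner read/write/execute
chmod 600 "$d/data.txt"    # no execute bit
find "$d" -perm -700 -type f   # prints only $d/run.sh
rm -rf "$d"
```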
| Find command: Searching for executable files |
1,659,734,788,000 |
I have a process that has been running for a very long time.
I accidentally deleted the binary executable file of the process.
Since the process is still running and doesn't get affected, the original binary file must still be somewhere else....
How can I get recover it? (I use CentOS 7, the running process is written in C++)
|
It could only be in memory and not recoverable, in which case you'd have to try to recover it from the filesystem using one of those filesystem recovery tools (or from memory, maybe). However!
$ cat hamlet.c
#include <unistd.h>
int main(void) { while (1) { sleep(9999); } }
$ gcc -o hamlet hamlet.c
$ md5sum hamlet
30558ea86c0eb864e25f5411f2480129 hamlet
$ ./hamlet &
[1] 2137
$ rm hamlet
$ cat /proc/2137/exe > newhamlet
$ md5sum newhamlet
30558ea86c0eb864e25f5411f2480129 newhamlet
$
With interpreted programs, obtaining the script file may be somewhere between tricky and impossible, as /proc/$$/exe will point to perl or whatever, and the input file may already have been closed:
$ echo sleep 9999 > x
$ perl x &
[1] 16439
$ rm x
$ readlink /proc/16439/exe
/usr/bin/perl
$ ls /proc/16439/fd
0 1 2
Only the standard file descriptors are open, so x is already gone (though may for some time still exist on the filesystem, and who knows what the interpreter has in memory).
| How to recover the deleted binary executable file of a running process |
1,659,734,788,000 |
It's a well-known fact that if one wants to execute a script in shell, then the script needs to have execute permissions:
$ ls -l
total 4
-rw-r--r-- 1 user user 19 Mar 14 01:08 hw
$ ./hw
bash: ./hw: Permission denied
$ /home/user/hw
bash: /home/user/hw: Permission denied
$
However, it is possible to execute this script with bash <scriptname>, sh <scriptname>, etc:
$ bash hw
Hello, World!
$
This means that basically one can execute a script file, even if it only has read permissions. This maybe is a silly question, but what is the point of giving execute permissions to a script file? Is it solely because in order for a program to run it needs to have execute permissions, but it actually doesn't add security or any other benefits?
|
Yes, you can use bash /path/to/script, but scripts can have different interpreters. It's possible your script was written to work with ksh, zsh, or maybe even awk or expect. Thus you have to know which interpreter to use to call the script. By instead making a script with a shebang line (that #!/bin/bash at the top) executable, the user no longer needs to know what interpreter to use.
It also allows you to put the script in $PATH and call it like a normal program.
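The difference can be seen side by side; a small sketch using a throwaway script (the file name is illustrative):

```shell
#!/bin/sh
# The same script, run two ways.
printf '#!/bin/sh\necho ok\n' > hw
bash hw     # works without the execute bit, but you must know the interpreter
chmod +x hw
./hw        # works for any interpreter named on the #! line
rm -f hw
```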
| reason to add execute permissions to a shell script |
1,659,734,788,000 |
I have a unix executable file located in a directory I generated. I believe I need to get this directory in my $PATH so that the unix executable is executable, but the documentation for the source code says that I need to edit my shell configuration file to add $home/meme/bin to my shell's path.
|
If you want to be able to execute a program by typing its name on the command line, the program executable must be in one of the directories listed in the PATH environment variable. You can see the current value of the variable like this ($ is your prompt, and the value below is an example):
$ echo $PATH
/home/drbunsen/bin:/usr/local/bin:/usr/bin:/bin
You have several choices; while #1 and #2 involve less advanced concepts, I recommend #3 which is less work in practice:
You can put the executable in a directory that's already on your PATH. For example, if /home/drbunsen/bin is already on your PATH, you can put the executable there. Or you can put the executable in /usr/local/bin if you want it to be available for all users.
You can add the directory where the executable is in your PATH. Edit the file ~/.profile (~/ means that the file is in your home directory) (create the file if it doesn't exist). Add a line like this:
PATH=$PATH:$HOME/meme/bin
(Note that it's $HOME, not $home; unix is generally case-sensitive. You can also write ~/meme/bin, ~ is a synonym for $HOME when it's at the beginning of a file path.) The change will take effect the next time you log in. You can type this same line in a terminal, and it will affect the shell running in that terminal and any program launched from it.
The approach I recommend is to keep the executable with the other files that are part of the program, in a directory of its own, but not to change PATH either.
Keeping the executable in $HOME/meme has the advantage that if you ever want to remove or upgrade the program, everything is in one place. Some programs even require this in order to find the files they use. Not changing PATH has the advantage that installing and uninstalling programs is less work.
To get the best of both worlds, create a symbolic link in a directory on your PATH, pointing to the actual executable. From the command line, run a command like this:
cd ~/bin
ln -s ../meme/bin/* .
That's assuming that ~/bin is already on your PATH; if it's not, add it through ~/.profile as indicated above. Pick another location if you like. Now making programs available is a matter of creating the symbolic links; making them unavailable is a matter of removing the symbolic links; and you can easily track what programs you've installed manually and where they live by looking at the symbolic links.
| Problem with $PATH and executable file |
1,659,734,788,000 |
Currently setting up xpra, which wants to run an X instance as non-root using the dummy driver, but the system Xorg binary is SUID. Since the system auto-updates, I would prefer not making and maintaining a non-SUID copy of the binary. I'm also trying to avoid using a hack like copy-execute-delete, e.g. in the tmp directory (would prefer to make it a clean one-liner, which I instinctively believe should be possible, though there may be some subtle security hole this capability would open). Symlinks would be acceptable, though AFAIK they don't provide permission bit masking capabilities.
My current best solution is a nosuid bind mount on the bin directory, which seems to do the trick, but as above I'd still prefer a solution that doesn't leave cruft in my system tree/fstab (e.g. some magic environment variable that disables suid the same way a nosuid mount does, or some command-line execute jutsu that bypasses the suid mechanism).
Any thoughts?
|
If X is dynamically linked, you could call the dynamic linker like:
/lib/ld.so /path/to/X
(adapt ld.so to your system (like /lib/ld-linux.so.2).
Example:
$ /lib64/ld-linux-x86-64.so.2 /bin/ping localhost
ping: icmp open socket: Operation not permitted
| Running setuid binary temporarily without setuid? |
1,659,734,788,000 |
I have seen many tutorials saying that the bin directory is used to store binary files, meaning there is only 0 and 1 in the files in that directory.
However, in many cases, I see files in bin that are not only 0 and 1.
For example, the django-admin.py under the xx/bin/ directory:
#!/usr/bin/env python
from django.core import management
if __name__ == "__main__":
management.execute_from_command_line()
|
No, a bin directory is not for storing only binary files. It's for keeping executable files, primarily.
Historically, before scripts written in various scripting languages became more common, bin directories would have contained mainly binary (compiled or assembled) non-text files, as opposed to source code. The main thing about the files in bin nowadays is that they are executable.
An executable script is a text file, interpreted by an interpreter. The script in your example is a Python script. When you run it, the python interpreter (which is another executable file somewhere in your $PATH) will be used to run it.
Also, as an aside, a text file is as much a file made up of zeroes and ones as a binary file is.
| Is the bin/ directory for storing binary files? |
1,659,734,788,000 |
When you run ./myscript.sh is that considered as "access" time?
I need to know the last time a script was run, but I'm not sure if this counts as mtime, ctime or atime (differences described here).
|
As explained in the answer you linked to, that depends on your settings. In principle, atime will change each time a file is read and in order to run a script, you need to read it. So yes, normally, the atime will change each time the script is executed. This is easily demonstrated by checking the current atime, running the script and then checking it again:
$ printf '#!/bin/sh\necho "running"\n' > ~/myscript.sh
$ stat -c '%n : %x' ~/myscript.sh
/home/terdon/myscript.sh : 2016-02-23 10:36:49.349656971 +0200
$ chmod 700 ~/myscript.sh
$ stat -c '%n : %x' ~/myscript.sh ## This doesn't change atime
/home/terdon/myscript.sh : 2016-02-23 10:36:49.349656971 +0200
$ ~/myscript.sh
running
$ stat -c '%n : %x' ~/myscript.sh ## Running the script does
/home/terdon/myscript.sh : 2016-02-23 10:38:20.954893580 +0200
However, if the script resides on a filesystem that is mounted with the noatime or relatime options (or any of the other possible options that can affect how atime is modified), the behavior will be different:
noatime
Do not update inode access times on this filesystem (e.g., for
faster access on the news spool to speed up news servers). This
works for all inode types (directories too), so implies nodira‐
time.
relatime
Update inode access times relative to modify or change time.
Access time is only updated if the previous access time was ear‐
lier than the current modify or change time. (Similar to noat‐
ime, but it doesn't break mutt or other applications that need
to know if a file has been read since the last time it was modi‐
fied.)
Since Linux 2.6.30, the kernel defaults to the behavior provided
by this option (unless noatime was specified), and the stricta‐
time option is required to obtain traditional semantics. In
addition, since Linux 2.6.30, the file's last access time is
always updated if it is more than 1 day old.
You can check what options your mounted systems are using by running the command mount with no arguments. The tests I show above were run on a filesystem that was mounted using the relatime option. With this option, atime is updated if i) the current atime is older than the current modification or change time or ii) it hasn't been updated for more than a day.
So, on a system with relatime, the atime is not changed when a file is accessed if the current atime is newer than the current modification time:
$ touch -ad "+2 days" file
$ stat --printf 'mtime: %y\natime: %x\n' file
mtime: 2016-02-23 11:01:53.312350725 +0200
atime: 2016-02-25 11:01:53.317432842 +0200
$ cat file
$ stat --printf 'mtime: %y\natime: %x\n' file
mtime: 2016-02-23 11:01:53.312350725 +0200
atime: 2016-02-25 11:01:53.317432842 +0200
The atime is always changed on access if it is more than a day old. Even if the modification time is older:
$ touch -ad "-2 days" file
$ touch -md "-4 days" file
$ stat --printf 'mtime: %y\natime: %x\n' file
mtime: 2016-02-19 11:03:59.891993606 +0200
atime: 2016-02-21 11:03:37.259807129 +0200
$ cat file
$ stat --printf 'mtime: %y\natime: %x\n' file
mtime: 2016-02-19 11:03:59.891993606 +0200
atime: 2016-02-23 11:05:17.783535537 +0200
So, on most modern Linux systems, atime will only be updated every day or so, unless the file has been modified since it was last accessed.
| Check last time .sh file used |
1,659,734,788,000 |
One thing that has been puzzling me for some time is this:
% which halt
/sbin/halt
% file /sbin/halt
/sbin/halt: symbolic link to `reboot'
However, executing sudo halt does, of course, not reboot the system. Why is that?
There are several other programs working that way, for example pdflatex.
|
Every program can see the full command line that was used to run it (except for wildcards and variables, which the shell expands).
In a C program, the command line is stored in argv, which is short for argument vector.
The progam's name is the first element of argv, i.e. argv[0].
Clearly in the case of halt and reboot, the program is changing its behavior based on argv[0].
From bash, you can see the full command line used to run a program using ps -p <pid> -o cmd or cat /proc/<pid>/cmdline.
Note that there is another type of link called a hard link that will have the same effect. On my system for example, sudo and sudoedit are the same file with two different names, and different behaviors.
ls -i can help you find those commands, e.g.:
$ ls -il | awk '$3 != 1 { print }'
total 156872
2491111 -rwsr-xr-x 2 root root 127560 2011-01-20 05:03 sudo
2491111 -rwsr-xr-x 2 root root 127560 2011-01-20 05:03 sudoedit
See man ln for more details about hard links if you're not familiar with them.
| Why do some symbolic links affect program behavior? |
1,659,734,788,000 |
I'm trying to understand the Linux file system, and one of my questions is:
1- Why are there multiple folders for executable files: /usr/bin, /usr/sbin and /usr/local/bin? Are there any differences between them?
2- If I have an executable file and I want to add it to my system, which of these three locations is best for me?
|
Run man hier from the command line to get the answer to your first question.
It depends. See /usr/bin vs /usr/local/bin on Linux
| Why there are multiple folders for executable files in Linux? [duplicate] |
1,659,734,788,000 |
I have downloaded a script named pyAES.py and put it in a folder named Codes, inside the Desktop directory of my Linux system.
According to this example,
http://brandon.sternefamily.net/2007/06/aes-tutorial-python-implementation/
When I type,
./pyAES.py -e testfile.txt -o testfile_encrypted.txt
the file pyAES.py should be executed.
but I am getting this error,
pi@raspberrypi ~/Desktop/Codes $ pyAES.py
-bash: pyAES.py: command not found
the output of ls -l command is,
pi@raspberrypi ~/Desktop/Codes $ ls -l
total 16
-rw-r--r-- 1 pi pi 14536 Oct 8 10:44 pyAES.py
Here is the output after chmod +x
pi@raspberrypi ~/Desktop/Codes $ chmod +x pyAES.py
pi@raspberrypi ~/Desktop/Codes $
pi@raspberrypi ~/Desktop/Codes $ pyAES.py
-bash: pyAES.py: command not found
pi@raspberrypi ~/Desktop/Codes $
and the command, chmod +x pyAES.py && ./pyAES.py gives the following error,
-bash: ./pyAES.py: /usr/bin/python2: bad interpreter: No such file or directory
I have also tried moving the file in /usr/bin directory and then executing it,
pi@raspberrypi /usr/bin $ pyAES.py
-bash: /usr/bin/pyAES.py: /usr/bin/python2: bad interpreter: No such file or directory
pi@raspberrypi /usr/bin $
I can see the file is present in /usr/bin directory but it is still giving an error that No such file or directory.
I want to know why the Linux terminal is not executing the Python script.
|
It seems you have a badly-written shebang line. From the error you're getting:
-bash: /usr/bin/pyAES.py: /usr/bin/python2: bad interpreter: No such file or directory
I'd say you should set the first line of /usr/bin/pyAES.py to
#!/correct/path/to/python
where the /correct/path/to/python can be found from the output of:
type -P python
It's /usr/bin/python (not /usr/bin/python2) on my system.
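As a sketch of the fix (the filename is from the question, but the replacement interpreter path is an example, not taken from the poster's system), you can inspect and rewrite the shebang in place:

```shell
# Create a stand-in script with a broken shebang line:
printf '#!/usr/bin/python2\nprint("hello")\n' > pyAES.py
head -n 1 pyAES.py        # shows the bad interpreter: #!/usr/bin/python2
# Rewrite line 1 to point at an interpreter that actually exists here;
# substitute the output of `type -P python` on your own machine:
sed -i '1s|^#!.*|#!/usr/bin/python3|' pyAES.py
head -n 1 pyAES.py        # now: #!/usr/bin/python3
```

(`sed -i` as used here is the GNU form; BSD sed needs `sed -i ''`.)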
| Running python script from Linux Terminal |
1,659,734,788,000 |
I hit again on this strange behaviour of the system. I'm running Debian 6.0.6 and had quite some troubles executing a script directly from CD/DVD. Finally I had to use:
sh /media/cdrom/command
to run it. What is the big deal of having to resort to sh?! What if the script relies on bash features? Really annoying and not much added to security in my opinion
Does anybody know of a good reason for that behaviour?
PS:
if you try to run it directly with ./... you get an error that gives no hint about the actual issue (the filesystem being mounted noexec):
bash: ./media/cdrom/command: No such file or directory
If you run it as bash /media/cdrom/command you get the same error (I think the mount options are verified by bash even for commands passed as parameters on the command line).
Permanent solution is to add exec to the mount options in /etc/fstab such as:
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec 0 0
|
Filesystems are often mounted noexec,nosuid by default to boost security a bit. Hence even though you see the executable bit set on the file, the kernel will refuse to run it. By calling it in the form interpreter path/to/script you are asking the system to run interpreter, which in turn receives path/to/script as an argument and parses it, thus bypassing the filesystem-imposed restriction (you may be able to achieve the same effect with compiled executables with: /lib/ld-linux.so.1 path/to/executable).
Hence one option is mount -o exec .... You might want to put the option into /etc/fstab - usually by replacing the defaults option with defaults,exec. However, unless you really know what you are doing, I would advise against this.
As for the bash specifics, I believe bash will interpret those even when invoked as sh. And you are definitely free to invoke it as bash path/to/script.
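The interpreter bypass can be illustrated without mounting anything. In this sketch a missing execute bit stands in for a noexec mount: both are refused by execve() on direct execution (noexec reports "Permission denied"), yet handing the file to the interpreter still works:

```shell
# A script WITHOUT the execute bit set (note: no chmod +x):
printf '#!/bin/sh\necho ran\n' > demo.sh
# Direct execution is refused by the kernel's execve() check:
./demo.sh 2>/dev/null || echo 'direct execution refused'
# Passing the file to the interpreter sidesteps that check entirely:
sh demo.sh
```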
| Why do I have to use sh to execute scripts from CD/DVD media? |
1,659,734,788,000 |
I found a command rename.ul on my Ubuntu machine. It comes from util-linux package.
It is odd to me because I rarely see executables with an extension. In addition, it seems unnecessary because the file is compiled.
Are there any historical or technical reasons for it?
I'm also confused because I could not find a file format associated with this extension.
|
The extension is to avoid conflict with the multitude of rename commands otherwise available on Debian. This change was made in 2007 in response to Debian bug #439647:
/usr/bin/rename is managed by the alternatives system (with Perl's
version the default). util-linux 2.13~rc3-8 installs its own binary
there, instead of registering it as an alternative.
In response, the util-linux rename was renamed to be rename.ul.
Even so, the rename.ul syntax is so different from the Perl variants that it's not added to the alternatives system by default (see Debian bug #439935).
| Why /usr/bin/rename.ul has an extension? |
1,659,734,788,000 |
I have a directory test with three files, ls -l test:
total 8
-rw-r--r-- 1 mb mb 16 Jul 25 11:12 regular_file
-rwxr-xr-x 1 mb mb 19 Jul 25 11:02 script.sh
lrwxrwxrwx 1 mb mb 12 Jul 25 11:14 symlink -> regular_file
It contains a regular file, a symbolic link, and an executable script.
After archiving this directory with tar -czf test.tgz test/, I wanted to extract the three files with 7-Zip:
7z x -tgzip test.tgz && 7z x -ttar test.tar
Unfortunately, 7-Zip doesn't produce the original files: The script loses its executable bit and symlink is not a symbolic link anymore, rather a file containing the text regular_file.
total 12
-rw-r--r-- 1 mb mb 16 Jul 25 11:12 regular_file
-rw-r--r-- 1 mb mb 19 Jul 25 11:02 script.sh
-rw-r--r-- 1 mb mb 12 Jul 25 12:16 symlink
On the other hand, I can extract the files with their permissions and the symlink intact using
tar -xzf test.tgz
Is there a way to make 7-Zip extract the files as they were before archiving them?
7-Zip version is 16.02. I'm on Arch Linux 5.7.7.
Here's the archive created with tar.
|
This seems to be a limitation of 7-Zip, judging from these bug reports:
#1302 .tar archives fail to extract properly
#1188 symlinks in tar archives are extracted to 0 byte files
For now, I'll keep using tar -xzf test.tgz to extract the files.
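To check what your extraction tool preserves, a small round trip with tar (directory and file names here are just examples) can serve as a baseline:

```shell
# Build a directory containing an executable script and a symlink:
mkdir -p src out
printf '#!/bin/sh\necho hi\n' > src/script.sh
chmod +x src/script.sh
ln -sf script.sh src/symlink
# Archive it, then extract into a separate directory:
tar -czf roundtrip.tgz src
tar -xzf roundtrip.tgz -C out
# tar keeps both the execute bit and the symbolic link:
ls -l out/src
```

Running the same archive through 7-Zip and diffing the `ls -l` output makes the lost metadata visible immediately.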
| Preserve file permissions and symlinks in archive with 7-Zip |
1,659,734,788,000 |
Many guides on the internet recommend setting the nosuid and noexec options, for example on the /tmp mount point. But doesn't noexec imply nosuid? What cannot get executed cannot make use of the suid bit, right?
|
Thanks for the link LJKims, it helps me to answer my own question. I forgot that the suid/sgid bit can also be set for directories.
According to the GNU coreutils documentation, files and directories created in a suid directory inherit the owner of the directory (sgid directories obviously inherit the group). So, if you want to avoid this behaviour, setting both noexec and nosuid on a mount point makes sense.
For completeness: in my tests on a current Debian, the suid bit on directories has no effect; only the sgid bit makes files/directories inherit the group of the directory.
# mkdir /test
# chmod 6777 /test
# ls -ld /test
drwsrwsrwx 2 root root 4096 Jun 10 18:50 /test
$ mkdir /test/foo; touch /test/bar
$ ls -l /test
-rw-r--r-- 1 user root 0 Jun 10 18:51 bar
drwxr-sr-x 2 user root 4096 Jun 10 18:51 foo
Edit:
For completeness: The nosuid mount option does not affect sgid-directories (on Debian 8 at least).
# mount -o loop,nosuid test.img /test
# mkdir /test/foo
# chmod 2777 /test/foo
$ touch /test/foo/bar; mkdir /test/foo/baz
$ ls -l /test/foo
-rw-r--r-- 1 user root 0 Jun 12 09:46 bar
drwxr-sr-x 2 user root 4096 Jun 12 09:46 baz
| Does the noexec mount option imply nosuid? |
1,659,734,788,000 |
I know how to monitor a process. Commands like top and so forth can monitor the CPU time and memory usage for a given process instance.
But say I expect a given executable to be run several times in the next hour, and I want to measure how many times it is run and the CPU time it has consumed. What's a command for that?
|
You could do something like:
mv my-executable my-executable.bin
And create my-executable as a wrapper script that does:
#! /bin/bash -
{ time "$0.bin" "$@" 2>&3 3>&-; } 3>&2 2>> /tmp/times.log
The script could add more information to the log like the time it was started, by whom, the arguments it was passed...
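For instance, a sketch of such an extended wrapper (the log format, and /bin/echo standing in for the real renamed binary, are my own choices for illustration):

```shell
#!/bin/sh
# Extended wrapper sketch: record timestamp, user, and arguments,
# then delegate to the real binary (here /bin/echo is a stand-in
# for the renamed "$0.bin" executable):
real=/bin/echo
log=./times.log
printf '%s %s: %s\n' "$(date '+%Y-%m-%dT%H:%M:%S')" "$(id -un)" "$*" >> "$log"
"$real" "$@"
```

Each invocation appends one line to the log, so counting runs is just `wc -l < times.log`, and the `time`-based CPU accounting from the wrapper above can be layered on top.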
BSD process accounting, at least on Linux, does report CPU time (user + sys), though not in a cumulative way like time does (child processes' CPU time is not accounted to the parent).
| How to monitor all executions of an executable over a time period |
1,659,734,788,000 |
I would like to use the install command in order to create a new executable file with pre-populated content (e.g. with a single pwd command in it).
So I've extended this example which creates a new empty executable file:
install -b -m 755 /dev/null newfile
into this one:
install -m755 <(echo pwd) newfile
or:
echo pwd | install -m755 /dev/stdin newfile
Here I expect a new executable file newfile to be created with the content pwd inside.
It works on Linux, however on OS X it fails with the following error:
BSD install (/usr/bin/install)
install: /dev/fd/63: Inappropriate file type or format
GNU install (/usr/local/opt/coreutils/libexec/gnubin/install)
install: skipping file /dev/fd/63, as it was replaced while being copied
Why doesn't this work on OS X when it works on Linux? Am I missing anything? Is there any way to bypass the above error by using a different syntax (without creating a file in a separate command and using chmod after that)?
On both environments (Linux & OS X) I've the same version of install:
$ install --version
install (GNU coreutils) 8.23
|
The BSD install found on OpenBSD systems has this piece of code in it (from src/usr.bin/xinstall/xinstall.c):
if (!S_ISREG(to_sb.st_mode))
errc(1, EFTYPE, "%s", to_name);
This emits the error
install: /dev/fd/4: Inappropriate file type or format
when it's discovered that /dev/fd/4 is not a regular file. (There's a separate earlier check for /dev/null)
That was fairly straightforward.
GNU install has this code (src/install.c in coreutils):
/* Allow installing from non-regular files like /dev/null.
Charles Karney reported that some Sun version of install allows that
and that sendmail's installation process relies on the behavior.
However, since !x->recursive, the call to "copy" will fail if FROM
is a directory. */
return copy (from, to, false, x, &copy_into_self, NULL);
The code emitting the error comes from src/copy.c:
source_desc = open (src_name,
(O_RDONLY | O_BINARY
| (x->dereference == DEREF_NEVER ? O_NOFOLLOW : 0)));
(a few lines omitted)
if (fstat (source_desc, &src_open_sb) != 0)
{
error (0, errno, _("cannot fstat %s"), quoteaf (src_name));
return_val = false;
goto close_src_desc;
}
/* Compare the source dev/ino from the open file to the incoming,
saved ones obtained via a previous call to stat. */
if (! SAME_INODE (*src_sb, src_open_sb))
{
error (0, 0,
_("skipping file %s, as it was replaced while being copied"),
quoteaf (src_name));
This is in copy_reg() which copies a regular file. The SAME_INODE macro evaluates to false because the inodes differ in the two stat structs *src_sb and src_open_sb. The *src_sb comes from a stat() or lstat() call on the source file name and src_open_sb from fstat() as seen above, on a newly open descriptor.
I can kinda see why opening a new file descriptor and comparing its inode to that of the file descriptor given by the shell (/dev/fd/4 in my case) will fail, but I can't put it into definite words unfortunately.
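As a practical workaround (my suggestion, not part of the analysis above): give install a real regular file, which satisfies both the S_ISREG check in the BSD version and the SAME_INODE check in the GNU version:

```shell
# Write the content to a temporary regular file first, then install it:
tmp=$(mktemp)
echo pwd > "$tmp"
install -m 755 "$tmp" newfile
rm -f "$tmp"
ls -l newfile   # newfile is executable and contains "pwd"
```

This is two commands instead of one, but it avoids chmod and behaves the same on both systems.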
| Why does OS X `install` error on redirected input when the same version of `install` on Linux works fine? |
1,659,734,788,000 |
Is it possible to have, say, one machine with software installed and call that software from another machine? I thought this would be called an "application server" but by googling I find things that are not exactly what I want. My scenario is basically the following:
I have a computer at home with Ubuntu and a bunch of programs, among them a LaTeX distribution. It is set up as a server, with ssh access, Apache and the like. From my work computer, running Windows, I would like to use the LaTeX installation at home instead of installing it locally. But I would like it to store the files, in particular the generated PDF, on the work computer (this is actually not extremely important: I can always copy the files later, but it would save some time if it worked).
What are the possible ways to do that?
|
Yes it is possible, but there are multiple steps involved:
You must be able to reach your home computer running Linux from the internet. This means opening up port 22 (ssh) on your router at home, or a higher port if your provider blocks incoming access on ports below 1024. Then install openssh-server (and make it listen on the chosen non-default port). You also need to know the router's IP address at home. Some routers have functionality to update a dynamic name service. If that is not available your home computer can do that, or in the worst case send an email on a regular basis to your work address (you should be able to pull the router's IP address from the headers of such an email).
Your work computer needs to be set up with PuTTY and an X server. PuTTY makes the secure connection; the X server is necessary to view remote programs that are not command-line based. You can use Xming for that. It might be that you can just run the LaTeX commands without X, depending on which editor/environment you normally use.
PuTTY also allows you to copy files from your machine at home to your local machine.
| How can one run a program installed in one machine from another machine? |
1,659,734,788,000 |
I have a number of files showing up green when I run ls. I understand these are executables, and I understand that one can make a file executable with chmod. But they are .csv and .pdf files. I don't understand how one could 'execute' a comma-separated text file or a PDF. So:
How can they actually be 'executable'?
And how would I execute them?
And what would happen when I did?
|
This is just a question of permissions. If a file has execute permissions, that just means users are allowed to execute it. Whether they will be successful is another matter. In order for a file to be executed, the user executing it must have the right to do so and the file needs to be a valid executable. The permissions shown by ls only affect the first part, permission, and have no bearing on the rest.
For instance:
$ cat file.csv
a,silly,file
$ chmod a+x file.csv
$ ls -l file.csv
-rwxr-xr-x 1 terdon terdon 13 May 29 15:22 file.csv
This file now has execute permissions (see the three x characters in the permissions string -rwxr-xr-x). But if I try to execute it, I will get an error:
$ ./file.csv
./file.csv: line 1: a,silly,file: command not found
That is because the shell is trying to execute the file as a shell script, and there are no valid shell commands in it, so it fails.
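Conversely, if the file's contents happen to be valid shell commands, the shell's fallback interpretation succeeds, which shows the extension itself is irrelevant (a contrived example):

```shell
# A "CSV" whose single line is also a valid shell command:
printf 'echo a,silly,file\n' > runnable.csv
chmod +x runnable.csv
# The kernel sees neither an ELF header nor a shebang, so the invoking
# shell falls back to interpreting the file as shell commands:
./runnable.csv
```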
| Executable common files (*.pdf, etc.) |
1,659,734,788,000 |
I notice that with bash scripts, some people use a different
shebang to the one that I'm used to putting at the top of my own.
Can someone simplify the difference between these two? I use the #!/bin/bash one all the time.
#!/bin/bash
#!/usr/bin/env bash
|
Using #!/usr/bin/env bash results in the script being run with whatever bash is found first in $PATH.
While it is common for bash to be located at /bin/bash, there are cases where it is not (on other operating systems). Another potential use is when there are multiple bash shells installed (a newer version at an alternate location like /usr/local/bin/bash).
Doing #!/usr/bin/env bash just takes advantage of a behavior of the env utility.
The env utility is normally used for manipulating the environment when calling a program (for example; env -i someprog to wipe the environment clean). However by providing no arguments other than the program to execute, it results in executing the specified program as found in $PATH.
Note that there are both advantages and disadvantages to doing this.
The advantages are as mentioned earlier, in that it makes the script portable if bash is installed in a different location, or if /bin/bash is too old to support things the script is trying to do.
The disadvantage is that you can get unpredictable behavior. Since you're at the mercy of the user's $PATH, it can result in the script being run with a version of bash that has different behavior than what the script expects.
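That $PATH dependence can be demonstrated directly; the "fake bash" below is a deliberately contrived stand-in for an alternate installation:

```shell
# Put a wrapper named "bash" first in PATH and watch env pick it up
# instead of /bin/bash:
mkdir -p fakebin
printf '#!/bin/sh\necho fake bash\n' > fakebin/bash
chmod +x fakebin/bash
PATH="$PWD/fakebin:$PATH" env bash
```

A script whose shebang is `#!/usr/bin/env bash` would be run by this wrapper too, for better (newer bash) or worse (an unexpected one).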
| What is the difference in these two bash environments? |